Summary: Read 10 tips about how to migrate Microsoft Visual Basic for Applications code to Microsoft Visual Basic code by using Microsoft Visual Studio 2005 Tools Second Edition for the Microsoft Office system. (19 printed pages)
Migrating to Visual Studio
Tip #1: Call Managed Code from Command Bar Controls
Tip #2: Customizing the Office Fluent Ribbon
Tip #3: Learn How to Create Windows Forms
Tip #4: Add Custom Task Panes
Tip #5: Convert Existing VBA Code
Read part two: 10 Tips for Migrating VBA to Visual Studio 2005 Tools for the Office System SE (Part 2 of 2).
This technical article is for developers who create solutions by using Microsoft Visual Basic for Applications (VBA) and who want to migrate their code to the managed code environment provided by Microsoft Visual Studio and the Microsoft .NET Framework. Because we (the authors of this paper) are developers who work in both VBA and Visual Studio, we understand the difficulties and challenges that you face.
In this article, we share what we learned when we moved from VBA to Microsoft Visual Studio 2005 Tools Second Edition for the 2007 Microsoft Office system. The tips in this article focus on what you need to understand as you migrate to the new platform. This article presents conversion scenarios, discusses pitfalls, and offers advice to help you successfully migrate your code. Although this article is not an in-depth reference, the tips and suggestions help you get started. Throughout the article, we provide links to other references that help you make the transition from VBA to Visual Basic.
In this article, VBA refers to Visual Basic for Applications and Visual Basic refers to the programming language you use in Visual Studio. When necessary, we explicitly refer to Visual Basic 6.0.
We assume that you can create and debug add-ins by using Visual Studio 2005 Tools for Office Second Edition (SE). For more information about downloading, installing, and using Visual Studio 2005 Tools for Office SE, see How to: Create Visual Studio Tools for Office Products. For more information about the Visual Studio 2005 user interface, see Don’t Freak Out About Visual Studio.
In many VBA applications, the user clicks a command bar button to launch your custom code. One of the first things you need to learn is how to provide a user interface element so the user can run your code. In the 2007 Microsoft Office system, custom command bars do not look like they look in Microsoft Office 2003. Instead of appearing as separate toolbars, custom command bars are grouped together on the Add-Ins tab, as shown in Figure 1.
Although we recommend that you learn how to customize the Microsoft Office Fluent user interface (see Tip #2), you can continue to use the CommandBars object model in Visual Studio 2005 Tools for Office SE. To make your custom buttons work, you need to change your existing code so that clicking the button launches managed code instead of a macro. When you use the CommandBars object model, your custom buttons appear in the Add-Ins tab in a 2007 Office system application. Customizing the Office Fluent Ribbon offers you more flexibility because you can control where your buttons or UI elements appear.
Because the 2007 Office system still exposes the CommandBar object model, you can dynamically add command bars and their controls when your application loads. You can also assign a variable that points to an existing CommandBar object or CommandBarControl object. However, you cannot use the OnAction property to determine the click action for a button. Instead, you need to write an event handler and attach it to the button’s Click event.
Suppose that you have a VBA add-in and you want to migrate it to Visual Studio 2005 Tools for Office SE but, for now, you want to continue to use the CommandBar object model. The following procedure shows you how to create a simple add-in that demonstrates this capability.
In Visual Studio 2005, create a 2007 Add-in project and use the Excel Add-in template. For more information about creating add-ins, see How to: Create Visual Studio Tools for Office Products.
In the Solution Explorer, double-click the ThisAddIn.vb class file.
Add the following declaration inside the ThisAddIn class, but outside any procedure.
Private WithEvents sheetInfoButton As Office.CommandBarButton
Add the following procedure to the ThisAddIn class.
Private Sub SetupCommandBars()
    Dim commandBar As Office.CommandBar = _
        Application.CommandBars.Add( _
        "VSTOAddinToolbar", _
        Office.MsoBarPosition.msoBarTop, , True)
    commandBar.Visible = True

    ' Add a button with an icon that looks like a report.
    sheetInfoButton = CType( _
        commandBar.Controls.Add( _
        Office.MsoControlType.msoControlButton), _
        Office.CommandBarButton)
    sheetInfoButton.Tag = "Display Sheet Info"
    sheetInfoButton.FaceId = 2522 ' Or try 2160 or 2950.
    sheetInfoButton.TooltipText = "Display Sheet Info"
    sheetInfoButton.DescriptionText = _
        "List all the sheets in the workbook."
End Sub
The code first obtains a reference to the top-level CommandBars object and adds a command bar to the set of Office Excel command bars.
Dim commandBar As Office.CommandBar = _
    Application.CommandBars.Add( _
    "VSTOAddinToolbar", _
    Office.MsoBarPosition.msoBarTop, , True)
commandBar.Visible = True
Visual Studio 2005 Tools for Office SE automatically defines the Application reference for you. The reference always refers to the host application, just as it does within VBA programs.
Although it looks like the code refers to standard objects from the Microsoft Office type library, it really refers to objects provided by a managed wrapper around the type library. For example, the CommandBars object is part of the Microsoft.Office.Core namespace. To avoid having to type the full name of the class (Microsoft.Office.Core.CommandBars) each time you want to refer to the class, the Visual Studio 2005 Tools for Office SE project templates include project-wide Imports statements. In this case, the project template automatically includes the code Imports Office=Microsoft.Office.Core.
Imports Office=Microsoft.Office.Core
That way, when you refer to Office.CommandBars, you actually refer to the complete reference. The Imports statement saves time as you write code. You can also add your own Imports statements to the top of any code file. For more information about importing a namespace, see Tip #7: Learn the .NET Framework in 10 Tips for Migrating VBA to Visual Studio 2005 Tools for the Office System SE (Part 2 of 2).
The next block of code creates a CommandBarButton control. The code uses the CType function to convert the return value of the Add method (a generic CommandBarControl object) to the specific CommandBarButton type. The cast is safe because passing msoControlButton to the Add method creates a CommandBarButton object.
sheetInfoButton = CType( _
    commandBar.Controls.Add( _
    Office.MsoControlType.msoControlButton), _
    Office.CommandBarButton)
The code next sets properties of the CommandBarButton control.
sheetInfoButton.Tag = "Display Sheet Info"
sheetInfoButton.FaceId = 2522 ' Or try 2160 or 2950.
sheetInfoButton.TooltipText = "Display Sheet Info"
sheetInfoButton.DescriptionText = _
    "List all the sheets in the workbook."
You still need to provide an event handler for the button’s Click event. In VBA, the code editor creates the event handler for you. Because you specified WithEvents when you declared sheetInfoButton in Step 3, Visual Studio creates the event handler for you as well.
If you do not want to use the WithEvents keyword to indicate that the control needs an event handler, you can use the AddHandler statement. This statement requires that you specify a particular event (sheetInfoButton.Click, in this case), along with the address of a procedure that the add-in runs when the event occurs. You use the AddressOf keyword to indicate that what follows is the name of the procedure to run. The procedure you specify must meet stringent requirements: It must have the exact procedure signature that the event expects to find. It is easier to use the WithEvents keyword, but sometimes you need more explicit control over how and when to attach event handlers to controls.
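As a sketch, the AddHandler approach might look like the following. It assumes the button was declared without the WithEvents keyword, and it reuses the handler signature shown in the next step; note that the Handles clause is omitted when you attach the handler yourself.

```vbnet
' In SetupCommandBars, after creating the button, attach the
' handler explicitly instead of relying on WithEvents.
AddHandler sheetInfoButton.Click, AddressOf sheetInfoButton_Click

' The procedure must match the Click event's signature exactly.
' No Handles clause is used when you attach with AddHandler.
Private Sub sheetInfoButton_Click( _
    ByVal Ctrl As Office.CommandBarButton, _
    ByRef CancelDefault As Boolean)
    ' Handle the click here.
End Sub
```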
At the top of the code window, in the Class Name drop-down list on the left, select sheetInfoButton. In the Method Name drop-down list on the right, select the Click event. Visual Studio inserts the sheetInfoButton_Click procedure, including the correct set of parameters. Modify the procedure so that it looks like this.
Private Sub sheetInfoButton_Click( _
    ByVal Ctrl As Office.CommandBarButton, _
    ByRef CancelDefault As Boolean) _
    Handles sheetInfoButton.Click

    Dim sw As New System.IO.StringWriter
    For Each sheet As Excel.Worksheet In Application.Worksheets
        sw.WriteLine(sheet.Name)
    Next
    MsgBox(sw.ToString(), MsgBoxStyle.OkOnly, "Sheet Info")
End Sub
The code in the sheetInfoButton_Click procedure loops through all the sheets in the workbook and adds the name of each to a memory buffer. The code then uses the MsgBox method to display the contents of the memory buffer. (You can also use the .NET Framework method MessageBox.Show to accomplish the same goal.) The System.IO.StringWriter class provides a convenient mechanism for writing multiple lines of text into a memory buffer; you can use the ToString method of the class to retrieve all the text as a single string, as in this example. When you click the command bar button, the code in the sheetInfoButton_Click event handler runs and displays an alert with the names of all the sheets in the workbook.
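For example, a sketch of the .NET Framework alternative to the MsgBox call (it assumes a reference to System.Windows.Forms, which add-in projects include by default):

```vbnet
' .NET Framework equivalent of the MsgBox call.
System.Windows.Forms.MessageBox.Show( _
    sw.ToString(), "Sheet Info", _
    System.Windows.Forms.MessageBoxButtons.OK)
```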
In the ThisAddIn_Startup procedure, after the existing code, add the following code to set up the command bars.
SetupCommandBars()
Save and run the project.
In Office Excel, click the Add-Ins tab. Your button appears in the Custom Toolbars group. Click the button, and you should see the results shown in Figure 2.
Exit Office Excel and return to Visual Studio. Save your add-in because you use it again in the next section.
For a review of the Office CommandBars object model, read How to: Create Office Menus Programmatically, which contains an example of creating a command bar and button by using WithEvents to handle the button’s click event.
If you want more control over where your buttons appear in a 2007 Office system application, you need to write code to customize the Office Fluent Ribbon. The Ribbon is a component of the Microsoft Office Fluent user interface, introduced in the 2007 Microsoft Office system. The Office Fluent Ribbon changes the way users interact with the host application’s menu system. Rather than grouping items by functionality into static, layered menus, the Ribbon works with you as you interact with the application. As you modify a table in Word 2007, the Office Fluent Ribbon displays tools that help in the current context. If you select a style in the Ribbon, Word 2007 updates the style of the selection even before you close the gallery of styles, so you see the effect of updating in real time.
You can modify almost any facet of the Office Fluent Ribbon. You can add your own tabs, groups, and controls. You can hide built-in controls or override the behavior for built-in controls. In this article, you learn how to use Visual Studio 2005 Tools for Office Second Edition to provide a platform for customizing the Ribbon. For more information about the Ribbon, see the Office Fluent User Interface Developer Portal.
In general, you can customize the Office Fluent Ribbon in two different ways. You can:
Add Ribbon markup to Open XML Format files (Microsoft Word 2007, Microsoft Excel 2007, Microsoft PowerPoint 2007) directly, by inserting the XML part into the document. In this case, you generally handle Ribbon interaction and events by using VBA code in the document itself.
Create a COM add-in that provides the Ribbon markup and event-handling code.
When you customize the Ribbon, you must provide XML markup that defines the content of the customization and you must provide code (either in VBA, or in managed code) that both reacts to Office Fluent Ribbon control events and provides dynamic content.
If you create a COM add-in, you can create a Visual Studio 2005 shared add-in, or you can use Visual Studio 2005 Tools for Office SE to create the add-in. Using Visual Studio 2005 Tools for Office SE is easier and more robust, and is the technique presented in this article.
Because Office Fluent Ribbon customization is a complicated topic, this article focuses on a simple customization: the one that Visual Studio 2005 Tools for Office SE provides when you add Ribbon support to your add-in. The following procedure shows you how to create a simple Ribbon customization.
Using the add-in that you created in the previous example, on the Project menu, click Add New Item.
In the Add New Item dialog box, click Ribbon Support. Accept the default name (Ribbon1.vb), and click Add. This action adds two new items to your project: Ribbon1.vb and Ribbon1.xml.
Open Ribbon1.xml. It contains the following declarative information about the items that appear on the Add-Ins tab on the Ribbon.
<customUI xmlns="http://schemas.microsoft.com/office/2006/01/customui" onLoad="OnLoad">
  <ribbon>
    <tabs>
      <tab idMso="TabAddIns">
        <group id="MyGroup"
               label="My Group">
          <toggleButton id="toggleButton1"
                        size="large"
                        label="My Button"
                        screentip="My Button Screentip"
                        onAction="OnToggleButton1"
                        imageMso="HappyFace" />
        </group>
      </tab>
    </tabs>
  </ribbon>
</customUI>
Although you need to study the documentation on Ribbon markup to understand this XML completely, it is easy to understand what the markup does. This XML adds a new group named MyGroup to the Add-Ins tab. The group has a toggle button labeled My Button, and on the button is an image of a happy face. Clicking the button executes a procedure in the add-in named OnToggleButton1.
In Solution Explorer, double-click Ribbon1.vb to examine its code.
The first portion of the Ribbon1.vb file contains a partial class that is commented out. This code connects the Office Fluent Ribbon named Ribbon1 to the add-in, and indicates to Visual Studio 2005 Tools for Office SE where to find this particular Ribbon customization. Because Visual Studio 2005 Tools for Office SE cannot predict what your add-in does, this code is commented out. You must uncomment the code for the add-in to work. Select the entire partial class named ThisAddIn (but not the comments immediately before it), and click the Uncomment the selected lines toolbar button, as shown in Figure 3.
Scroll down in the Ribbon1.vb file to find the Ribbon1 class, which describes the behavior of your Office Fluent Ribbon customization. Expand the Callbacks section to find the OnToggleButton1 procedure. This code uses the isPressed parameter to display different text when the user presses and releases the toggle button.
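The generated callback looks roughly like the following (a sketch; the exact text that the template generates may differ):

```vbnet
' Ribbon callback named by the onAction attribute in Ribbon1.xml.
Public Sub OnToggleButton1( _
    ByVal control As Office.IRibbonControl, _
    ByVal isPressed As Boolean)
    ' isPressed reflects the toggle button's new state.
    If isPressed Then
        MsgBox("Pressed")
    Else
        MsgBox("Released")
    End If
End Sub
```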
Expand the Helpers code region to find the GetResourceText procedure, which the add-in uses to load the Office Fluent Ribbon’s XML markup, so that the host application can display the Ribbon customization.
For an easier way to load the XML content, see the references that follow for more information and details on customizing the Office Fluent Ribbon.
Save and run your add-in. In Office Excel, click the Add-Ins tab to see the new button on the Office Fluent Ribbon. When you click the button, the OnToggleButton1 procedure runs and you see the alert shown in Figure 4.
Exit Office Excel and return to Visual Studio.
For more information about working with the Office 2007 Ribbon, see the following resources:
Customizing the 2007 Office Fluent User Interface Using Visual Studio 2005 Tools for the Office System SE (Part 1 of 2)
Customizing the 2007 Office Fluent User Interface Using Visual Studio 2005 Tools for the Office System SE (Part 2 of 2)
Customizing the 2007 Office Fluent Ribbon for Developers (Part 1 of 3)
Customizing the 2007 Office Fluent Ribbon for Developers (Part 2 of 3)
Customizing the 2007 Office Fluent Ribbon for Developers (Part 3 of 3)
Custom Task Panes, the Office Fluent Ribbon, and Reusing VBA Code in the 2007 Office System
Developer Overview of the User Interface for the 2007 Microsoft Office System
Although much of the code in your VBA modules translates almost directly to Visual Basic, user form code does not. If your VBA application uses forms, you need to recreate the user interface in Visual Basic by using Windows Forms. You can view this downside as an opportunity, though: Windows Forms provides more controls and features than are available in VBA user forms.
When you design your application, be aware that a custom task pane might be a better user interface choice.
Although designing Windows Forms in Visual Studio 2005 Tools for Office SE is a different experience than designing forms in VBA, Windows Forms is far more flexible. It is easy to create and display Windows Forms as part of a Visual Studio 2005 Tools for Office SE add-in. This tip points out some of the differences between user forms and Windows Forms, and shows you how to create your own simple form.
One difference between user forms and Windows Forms is that user forms are modal by default. This means that when a user form is displayed, you cannot access the host application until you hide or close the user form. Windows Forms are modeless by default.
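A sketch of the difference, assuming a Windows Form class named Form1 (the name used for the form you create later in this tip):

```vbnet
Dim frm As New Form1

' Modeless display (the Windows Forms default): the user can
' keep working in the host application while the form is open.
frm.Show()

' Modal display (the VBA user form default): the call blocks
' until the user closes the form.
' frm.ShowDialog()
```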
Windows Forms provide most of the controls you find in the VBA user forms toolbox.
ToggleButton: Use a CheckBox control with its Appearance property set to Button, rather than the default Normal.
OptionButton: Use a RadioButton control.
Frame: Use a GroupBox control. (This control also acts as a container for RadioButton controls, allowing only a single RadioButton control to be selected at a time.)
CommandButton: Use a Button control.
TabStrip or MultiPage: Use a TabControl control. The behavior is somewhat different, but the concepts are the same.
Scrollbar: Use the HScrollBar or VScrollBar control for horizontal or vertical scroll bars.
SpinButton: Use the NumericUpDown or DomainUpDown control, depending on whether you want to spin through numeric values or text values.
Image: Use the PictureBox control.
Each of the Windows Forms controls is slightly different from its user form equivalent. The properties, methods, and events are all different enough that you need to refer to the documentation. In general, however, they are functionally equivalent.
Suppose that you want an Office Excel add-in to display a form that allows users to select a date to insert in the current cell. Although you could use a Microsoft ActiveX control on a user form to accomplish this goal, the task is quite simple using Windows Forms. The following procedure shows you how to modify the example you created in the previous tip to display a Windows Form.
Start with the add-in you created earlier, or create an add-in for Excel 2007, and add Office Fluent Ribbon customization support. As discussed previously, make sure you uncomment the ThisAddIn partial class in the Ribbon1.vb code file.
In Ribbon1.xml, change the markup so that the toggle button becomes a regular button, and change the onAction attribute to name a different procedure. When you are done, the markup for the former toggleButton element should look like the following.
<button id="myButton"
size="large"
label="My Button"
screentip="My Button Screentip"
onAction="OnClick"
imageMso="HappyFace" />
On the Project menu, click Add Windows Form. Click Add to create a new Windows Form named Form1.vb in your add-in. Set properties of the form as shown in the following table.
MaximizeBox: False
MinimizeBox: False
Text: Select a Date
From the Toolbox window, drag a MonthCalendar control onto the new form. (If you do not see the Toolbox window, on the View menu, click Toolbox.) Resize the form until it is big enough for the control, as shown in Figure 5.
Double-click the MonthCalendar control. This action creates an event handler for the control’s DateChanged event. Modify the MonthCalendar1_DateChanged procedure, adding the following code.
Globals.ThisAddIn.Application.ActiveCell.Value = _
    MonthCalendar1.SelectionRange.Start.ToShortDateString()
Me.Close()
The add-in template provides the Globals class. The Globals class allows you to access the ThisAddIn class, which you can use to access the host’s Application object. Using the host’s Application object, the code finds the ActiveCell reference, and sets its Value property to the selected date, converted to a string.
If, in the future, you plan to convert this code to Microsoft Visual C#, we advise you to replace all references to the Value property with the similar Value2 property. The Value property accepts a parameter, and Visual C# does not support parameterized properties. Because of this, Visual C# does not recognize the Value property. Using Value2 in your Visual Basic code simplifies conversion to Visual C#.
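For example, the line in the DateChanged handler would become the following:

```vbnet
' Value2 is not a parameterized property, so this line converts
' cleanly to Visual C# later.
Globals.ThisAddIn.Application.ActiveCell.Value2 = _
    MonthCalendar1.SelectionRange.Start.ToShortDateString()
```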
After inserting the selected date, the calendar form closes.
In the Ribbon1.vb file, add the following procedure within the Ribbon1 class after the OnToggleButton1 procedure.
Public Sub OnClick(ByVal control As Office.IRibbonControl)
    Using demoForm As New Form1
        demoForm.ShowDialog()
    End Using
End Sub
This code creates an instance of the Form1 class that you just created, and ensures that the common language runtime destroys the form when you close it—the Using block handles this for you. Inside the Using block, the code calls the ShowDialog method of the form, which displays the form modally, much like a user form.
Save and run the solution. In Office Excel, on the Add-Ins tab, click the new button. The calendar form appears. Select a date; the form inserts the selected date into the current cell and then closes.
You can do much more with Windows Forms in Visual Studio 2005 Tools for Office SE add-ins, but this simple example helps you get started.
The operating system determines where the form is displayed. If you want to make the form appear within the boundaries of Office Excel, you need more complex code. For more information, see the newsgroup posting How to get IWin32Window for Show/ShowDialog in Excel.
For more information about getting started with Windows Forms, see the following references:
Windows Forms for Visual Basic 6.0 Users: a good reference if you are familiar with both Visual Basic 6.0 forms and VBA user forms.
Windows Forms: a portal to articles, how-tos, and reference materials related to Windows Forms in Visual Studio 2005.
You may find that displaying Windows Forms does not meet your needs. Perhaps you want to display a task pane, docked to the edge of the host application window, and add your content there. Custom task panes make this possible and they are easy to create.
The following 2007 Office system applications support custom task panes: Microsoft Office Access, Office Excel, Microsoft Office InfoPath, Microsoft Office Outlook, Microsoft Office PowerPoint, and Office Word. Some 2007 Office system applications, such as Microsoft Office Visio 2007, do not support them.
When you create a custom task pane, you write code to determine exactly what appears on the task pane at run time. It is easier to create a custom user control, especially if you want to display more than a single control on the task pane, because a user control can combine many other controls. Then, when it is time to display the task pane, you add a single instance of your user control in code instead of adding each control individually. The following procedure shows you how to create a simple custom task pane that, like the previous tip, allows you to insert a date, this time from a docked task pane.
Start with the add-in you created earlier, or create an add-in for Excel 2007.
On the Project menu, click Add User Control. In the Add New Item dialog box, name your user control CalendarTaskPaneControl, and click Add when you are done. Visual Studio displays a designer for your control.
Select the control designer, and set its properties as shown in the following table:
Font: Segoe UI, 10pt
Size: 232, 400
From the Toolbox, drag a Label control onto the control designer. Set its properties as shown in the following table:
AutoSize: False
Dock: Top
Text: Date Selector Task Pane
TextAlign: BottomCenter
Drag a MonthCalendar control onto the control designer, and place it below the Label control.
Drag a Button control onto the control designer, and place it below the MonthCalendar control. Set the button's Anchor property to Top, Left, Right, and set its Text property to Insert Selected Date. When you finish, the user control looks like Figure 6.
Double-click the button you created, and add the following code to the button’s Click event handler.
Globals.ThisAddIn.Application.ActiveCell.Value = _
MonthCalendar1.SelectionRange.Start.ToShortDateString()
Within the ThisAddIn.vb file, in the ThisAddIn_Startup procedure, add the following code at the end of the procedure. This code creates the custom task pane, adds it to the application’s collection of custom task panes, sets its width, and displays it.
Dim ctp As Microsoft.Office.Tools.CustomTaskPane = _
    Me.CustomTaskPanes.Add( _
    New CalendarTaskPaneControl, "Calendar Task Pane")
ctp.Width = 232
ctp.Visible = True
Save and run the project. When the application starts, you see the custom task pane shown in Figure 7. Select a date, and click the button. The task pane inserts the selected date into the worksheet.
Custom task panes have many benefits. Using custom task panes, you can:
Install as many task panes as you like.
Display multiple task panes concurrently.
Show and hide task panes as necessary.
Dock, move, and resize task panes.
Customize task panes at run time by using code.
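A sketch of showing, hiding, and docking a task pane at run time. It assumes you store the CustomTaskPane object (the ctp variable from the startup code) in a field so that other code, such as a Ribbon callback, can reach it:

```vbnet
' Dock the task pane on the left edge of the host window.
ctp.DockPosition = _
    Office.MsoCTPDockPosition.msoCTPDockPositionLeft

' Show or hide the task pane, for example from a Ribbon button.
ctp.Visible = Not ctp.Visible
```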
For more information about custom task panes, see the following resources:
Creating Custom Task Panes Using Visual Studio 2005 Tools for the Office System SE
Managing Task Panes in Multiple Word and InfoPath Documents
Creating Custom Task Panes in the 2007 Office System
As a VBA developer, you already know the object models for the Microsoft Office products you use. You probably also have a large library of VBA code that you use as a resource when you create applications. As you migrate to Visual Studio 2005 Tools for Office SE, you may think that you need to rewrite your code to make it work in Visual Basic. Fortunately, for the most part, you can copy your VBA code into Visual Studio and it works with no or minor changes. (If your VBA code uses classes, it is more likely that it translates directly to Visual Basic.) A common problem when migrating VBA code to Visual Basic is ambiguous references. If the Visual Basic compiler cannot figure out where to find the value for a variable, you need to help the compiler resolve (or disambiguate) the reference. This is discussed in the example in this tip.
For example, suppose you have a VBA function named CreateOutline that you wrote for Microsoft Office Word. The CreateOutline function builds a new document that contains an outline of the current document, and you want to migrate this function to a Visual Studio 2005 Tools for Office SE add-in. The CreateOutline function calls another function named GetLevel. The code for both functions follows.
Public Sub CreateOutline()
    Dim docOutline As Word.Document
    Dim docSource As Word.Document
    Dim rng As Word.Range
    Dim astrHeadings As Variant
    Dim strText As String
    Dim intLevel As Integer
    Dim intItem As Integer

    Set docSource = ActiveDocument
    Set docOutline = Documents.Add

    ' Content returns only the
    ' main body of the document, not
    ' the headers and footer.
    Set rng = docOutline.Content

    astrHeadings = _
        docSource.GetCrossReferenceItems(wdRefTypeHeading)
    For intItem = LBound(astrHeadings) To UBound(astrHeadings)
        ' Get the text and the level.
        strText = Trim$(astrHeadings(intItem))
        intLevel = GetLevel(CStr(astrHeadings(intItem)))
        ' Add the text to the document.
        rng.InsertAfter strText & vbNewLine
        ' Set the style of the selected range and
        ' then collapse the range for the next entry.
        rng.Style = "Heading " & intLevel
        rng.Collapse wdCollapseEnd
    Next intItem
End Sub
Private Function GetLevel(strItem As String) As Integer
    ' Return the heading level of a header from the
    ' array returned by Word.
    ' The number of leading spaces indicates the
    ' outline level (2 spaces per level: H1 has
    ' 0 spaces, H2 has 2 spaces, H3 has 4 spaces.)
    Dim strTemp As String
    Dim strOriginal As String
    Dim intDiff As Integer

    ' Get rid of all trailing spaces.
    strOriginal = RTrim$(strItem)

    ' Trim leading spaces, and then compare with
    ' the original.
    strTemp = LTrim$(strOriginal)

    ' Subtract to find the number of
    ' leading spaces in the original string.
    intDiff = Len(strOriginal) - Len(strTemp)
    GetLevel = (intDiff / 2) + 1
End Function
To test these functions, start Office Word and copy them into a new module in the Visual Basic Editor. Create a test document that has some headings (use styles such as Heading 1 and Heading 2) and some text. With your cursor in the CreateOutline function, press F5. The function creates a document that contains an outline of your test document.
The following procedure shows you how to create a Visual Studio 2005 Tools for Office SE add-in that uses this code, converted to Visual Basic.
Start Visual Studio and create an add-in project using the Word Add-in template.
Follow the instructions in Tip #2 and Tip #3 that walk you through creating a simple Office Fluent Ribbon customization. Add a button to the Ribbon, as shown in those tips. (Ensure the OnAction attribute in the XML content contains the text OnClick—that is the name of the button’s Click event handler.)
Copy the two VBA procedures, CreateOutline and GetLevel, into the Ribbon1 class in the Ribbon1.vb file. The code has only a few compiler errors and almost works as is.
In Visual Studio, blue underlines indicate compiler errors.
Add the following procedure to the Ribbon1 class.
Public Sub OnClick(ByVal control As Office.IRibbonControl)
    CreateOutline()
End Sub
At this point, the GetLevel function compiles completely and the CreateOutline method contains only a few places that need modification.
The Option Strict setting (which you can apply per file or per project) affects the behavior of the compiler. It is off by default. If you add Option Strict On to the top of the file, or if Option Strict On is the default setting in Visual Studio, more lines of code have compiler errors. If your code has many compiler errors, you probably have this setting turned on. To see whether it is turned on, on the Tools menu, click Options, expand Projects and Solutions, and then click VB Defaults. For now, add Option Strict Off to the top of the code file. Later, we discuss how to make the code work with Option Strict turned on.
The following line fails because the compiler cannot find the ActiveDocument reference, which is global in VBA code written for Office Word.
docSource = ActiveDocument
To solve the problem, replace the code with the following.
docSource = Globals.ThisAddIn.Application.ActiveDocument
The same issue applies to the next error.
docOutline = Documents.Add
Replace that code with this fixed version.
docOutline = Globals.ThisAddIn.Application.Documents.Add
You might find it simpler to create a Word.Application variable and assign it the value Globals.ThisAddIn.Application. You can then use that variable rather than the full reference in the fixed code.
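For example (a sketch):

```vbnet
' Cache the host application once; later code gets shorter.
Dim wordApp As Word.Application = Globals.ThisAddIn.Application
docSource = wordApp.ActiveDocument
docOutline = wordApp.Documents.Add()
```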
The following fails because the Visual Basic compiler cannot determine which enumeration provides the wdRefTypeHeading value.
astrHeadings = _
docSource.GetCrossReferenceItems(wdRefTypeHeading)
Because Office Word uses loosely typed values for its enumerations, you must resolve the reference. In contrast, Office Excel and most of the other 2007 Office system applications create typed enumerations that are easier to resolve. To resolve the reference in Office Word, search for it by its full name, wdRefTypeHeading, in the Object Browser. The Object Browser is available from the View menu, as shown in Figure 8. You can also use the Object Browser in the Visual Basic Editor in Office Word to resolve the reference.
In general, it is easier to find the full reference for enumerated values in VBA code written for Office Excel and Office PowerPoint. In Visual Studio, press the space bar immediately before the ambiguous value, and Visual Studio can usually find the correct enumeration, allowing you to select from a list of possible values.
Based on your findings in the Object Browser, replace the problematic code with the following.
astrHeadings = _
docSource.GetCrossReferenceItems( _
Word.WdReferenceType.wdRefTypeHeading)
You can shorten the namespace because the add-in template includes an Imports statement that defines Office Word as Microsoft.Office.Interop.Word.
The following line of code fails to compile for the same reason.
rng.Collapse(wdCollapseEnd)
Search for the enumerated value in the Object Browser, and after you find it, replace the problematic code with the following.
rng.Collapse(Word.WdCollapseDirection.wdCollapseEnd)
Save and run the project.
To test the conversion, create a blank document in Office Word and add text with headings that use the Heading 1, Heading 2, and Heading 3 styles. On the Add-Ins tab, click the button for your customization. The code should create an outline of your document. Close Office Word when you are finished.
Because Option Strict is off, converting VBA code to Visual Basic is relatively easy. Turning Option Strict on provides much better type safety and allows the compiler to determine code that has late-binding or loosely-typed conversions. In general, your code is safer if you turn Option Strict on. However, when you do, you have more work to do when you convert code.
Although we do not recommend using Option Strict On when you convert VBA code to Visual Basic (although we do recommend using Option Strict On when you write new code), it is instructional to convert the code with Option Strict turned on. The following procedure shows you how to make the code compile with Option Strict on.
Scroll to the top of the Ribbon1.vb file, and add the following as the first line of code in the file.
Option Strict On
You now see more compile errors in the code. In this example, the errors are almost all caused by the same problem: The astrHeadings variable is now defined as an Object. Although the variable is originally defined as a Variant, when you copied the code into Visual Basic, the code editor changed the definition. Because the variable is defined as an Object, later code cannot treat it as an array, which the code tries to do. To fix the problem, change the declaration of astrHeadings so that it is an array of strings.
astrHeadings
Dim astrHeadings As String()
Explicitly declaring the variable fixes some of the compile errors. The code can now retrieve the LBound and UBound values, although in Visual Basic, the LBound value is always 0. Now the call to GetCrossReferenceItems fails. The call fails because this method returns an Object, not an array of strings. To fix the problem, use the CType method to cast the result as the correct type. Change the call to GetCrossReferencesItems to the following.
astrHeadings = _
CType(docSource.GetCrossReferenceItems( _
Word.WdReferenceType.wdRefTypeHeading), String())
The remaining error occurs in the GetLevel function: the final line of code does not compile. If you rest the pointer on the code, the tip shown in Figure 9 displays, indicating that the code includes an implicit type conversion.
You can use the red icon near the error to display the Error Correction Assistant, as shown in Figure 10. Rest the pointer on the red icon, and when the arrow appears, click it. (You can also press SHIFT+ALT+F10 after you click the error.)
Click the link in the Error Correction Assistant, as shown in Figure 11. Visual Studio replaces the code with code that performs an explicit conversion.
GetLevel = CInt((intDiff / 2) + 1)
After all of that work, you might feel it is better to leave Option Strict off and, in general, we agree with you. When you convert VBA code to Visual Basic, it might be better to leave Option Strict off. But, when you write new code, turning Option Strict on helps you create better, safer code.
In this tip, we cannot cover all the issues that you need to consider when you copy VBA code to Visual Basic.
Exception Handling
Although On Error GoTo is still supported, its support is somewhat limited, and you cannot use this technique and structured exception handling in the same procedure. Learn how to use exceptions in the .NET Framework. Exceptions are safer and they perform better: On Error GoTo is slow, especially when you use it to handle situations that are not really errors. See Visual Basic .NET Internals for more information.
Variant Variables vs. Object Variables
Copying code from VBA into Visual Basic converts all Variant variables into Object variables for you, but using Object variables restricts how you can use your variables, and degrades performance. For more information, see Visual Basic .NET Internals. In general, replace Object variables with strongly-typed variables.
Variant
Object
Procedure Parameters
In VBA, procedure parameters are passed by reference by default: In Visual Basic, they are passed by value, unless you use the ByRef keyword.
Dates
In VBA, dates are stored internally as a serial value (the number of days since Dec 30, 1899) and time values are stored as a fraction of a day. In Visual Basic, DateTime values are stored completely differently.
Array Lower Bounds
In VBA, the lower bound of an array can be any value. In Visual Basic, the lower bound is always 0.
Examine the following references for more in depth coverage of all the differences between VBA and Visual Basic.
Visual Basic .NET Internals
describes many details. Find out when to use VBA constructs, and when to migrate to .NET Framework classes.
Convert VBA Code to Visual Basic When Migrating to Visual Studio 2005 Tools for Office lists problem areas, such as fully qualifying enumerations, differences in the way dates are handled, parameter passing (ByVal default instead of ByRef), and array bases (the lower bound cannot be changed).
Converting Code from VBA to Visual Basic .NET
Life Without On Error Goto Statements
Custom Task Panes, the Office Fluent Ribbon, and Reusing VBA Code in the 2007 Office System
Object Library Reference for the 2007 Microsoft Office System
summarizes the changes for each version of Microsoft Office since Microsoft Office 97 and also provides conceptual overviews and how-tos.
Free Book - Introducing Microsoft Visual Basic 2005 for Developers | http://msdn.microsoft.com/en-us/library/bb960898.aspx | crawl-002 | refinedweb | 6,324 | 55.95 |
Switching from C to C++
Some believe that learning C++ is easy and suggest learning C before learning C++, while other people disagree and believe that if your intention is to learn C++ that you are better off learning C++ directly rather than trying to learn C first. Switching from C to C++ can be both easy, as there are many similarities between the two languages, and hard, as there are many differences that require forgetting what you know and habits that you may have developed from programming in C.
Differences[edit]
- C++ uses new and delete operators for memory management while C uses library functions.
- C++ extends structs and unions, and also includes classes.
- C++ has templates while C does not.
- C++ has operator and function overloading.
- C++ requires type casts conversions but C is less strict and C++ also provides alternative type conversion semantics.
- typedef creates unique types in C++, and creates type aliasing in C.
Thinking in C++[edit]
If you are previously a C programmer and have decided to switch to C++, you should throw away some C concepts and get used to the C++/OO way of doing things:
- C programmers tend to write lots of global functions. If these functions needs an object to work, you should make them member functions of a class.
- You should group similar functions working with different types in a template.
- C programmers usually prefix their identifiers to prevent name conflict. However, this is not necessary in C++ as they can be put in a namespace.
- There are usually lots of pointers and type casts in C. However, you should use less pointers in favour of references and less type casts in favour of derived classes.
- Think everything as objects. For example, if you write a control program for a plane, you should create a class called Plane and create functions inside it. | https://en.wikiversity.org/wiki/Switching_from_C_to_C%2B%2B | CC-MAIN-2018-26 | refinedweb | 314 | 69.82 |
To create your own custom objects, you must define a sort of template, or
cookie cutter, called a class. You do so in Python using the class statement,
followed by the name of the class and a colon. Following this, the body of the
class definition contains the properties and methods that will be available for
all object instances that are based on this class.
class
Let's take all the functions that we've created so far and recast them as
methods of a DNA class. Then we'll see how to create DNA objects based on our
DNA class. While we could do all this from the Python shell, instead we will
place this code into a bio.py file and show how we can use this file from the
Python shell. The contents of our bio.py file, which Python calls a module,
look like this.
class DNA:
"""Class representing DNA as a string sequence."""
basecomplement = {'A': 'T', 'C': 'G', 'T': 'A', 'G': 'C'}
def __init__(self, s):
"""Create DNA instance initialized to string s."""
self.seq = s
def transcribe(self):
"""Return as rna string."""
return self.seq.replace('T', 'U')
def reverse(self):
"""Return dna string in reverse order."""
letters = list(self.seq)
letters.reverse()
return ''.join(letters)
def complement(self):
"""Return the complementary dna string."""
letters = list(self.seq)
letters = [self.basecomplement[base] for base in letters]
return ''.join(letters)
def reversecomplement(self):
"""Return the reverse complement of the dna string."""
letters = list(self.seq)
letters.reverse()
letters = [self.basecomplement[base] for base in letters]
return ''.join(letters)
def gc(self):
"""Return the percentage of dna composed of G+C."""
s = self.seq
gc = s.count('G') + s.count('C')
return gc * 100.0 / len(s)
def codons(self):
"""Return list of codons for the dna string."""
s = self.seq
end = len(s) - (len(s) % 3) - 1
codons = [s[i:i+3] for i in range(0, end, 3)]
return codons
Much of this should look familiar based on our existing functions. Class
definitions do add a few new elements that we need to cover. Let's look at how
to use this new class before exploring the extra details.
We create object instances by calling the class, much like we would
call a function. The first thing we need to do is make the Python shell aware
of this class definition. We do that by importing the DNA class definition
from our bio.py module. Then we create an instance of the DNA class, passing
in the initial string value. From that point on the object keeps track of
its own sequence value, and we simply call the methods that are defined for
that object.
>>> from bio import DNA
>>> dna1 = DNA('CGACAAGGATTAGTAGTTTAC')
>>> dna1.transcribe()
'CGACAAGGAUUAGUAGUUUAC'
>>> dna1.reverse()
'CATTTGATGATTAGGAACAGC'
>>> dna1.complement()
'GCTGTTCCTAATCATCAAATG'
>>> dna1.reversecomplement()
'GTAAACTACTAATCCTTGTCG'
>>> dna1.gc()
38.095238095238095
>>> dna1.codons()
['CGA', 'CAA', 'GGA', 'TTA', 'GTA', 'GTT', 'TAC']
Since a class acts as a kind of template that's used to create multiple
object instances, we need the ability, inside a class method, to refer to the
specific object instance on which the method is called. To accommodate this
need, Python automatically passes the object instance as the first argument to
each method. The convention in the Python community is to name that first
argument "self." That's why you see "self" as the first argument in all the
method definitions of our DNA class.
self
The other thing to note is that the __init__() method. Python calls this
specially named method when creating instances of the class. In our example,
DNA.__init__ expects to receive a string argument, which we then store as a
property of the object instance, self.seq.
__init__()
DNA.__init__
self.seq
We made one other change when we moved our functions into class methods. We
moved the basecomplement dictionary definition out of the complement() method
and into the class definition. As part of the class definition, the dictionary
is only created once, rather than each time the method is called. The
dictionary is shared by all instances of the class, and it can be used by more
than one method. This is in contrast to the seq property, for which each object
instance will have its own unique value.
basecomplement
complement()
seq
As you can see, classes provide a effective way to group related data and
functionality. Let's finish our shell session by creating a few more DNA
instances.
>>> dna2 = DNA('ACGGGAGGACGGGAAAATTACTAGCACCCGCATAGACTT')
>>> dna2.codons()
['ACG', 'GGA', 'GGA', 'CGG', 'GAA', 'AAT', 'TAC', 'TAG',
'CAC', 'CCG', 'CAT', 'AGA', 'CTT']
>>> dna3 = DNA(dna1.seq + dna2.seq)
>>> dna3.reversecomplement()
'AAGTCTATGCGGGTGCTAGTAATTTTCCCGTCCTCCCGTGTAAACTACTAATCCTTGTCG'
>>> dna4 = DNA(dna3.reversecomplement())
>>> dna4.codons()
['AAG', 'TCT', 'ATG', 'CGG', 'GTG', 'CTA', 'GTA', 'ATT',
'TTC', 'CCG', 'TCC', 'TCC', 'CGT', 'GTA', 'AAC', 'TAC',
'TAA', 'TCC', 'TTG', 'TCG']
Even with this rudimentary class definition, manipulated from the Python
shell, we can start to see Python's potential for analyzing biological data in
a clear, coherent fashion, with a minimum of syntactic overhead.
Python is a popular, open source programming language with much to offer the
bioinformatics community. At the same time, Python came late to the
bioinformatics party and may never rise to level of popularity of Perl. Choice
is always a good thing, though, and Python offers a viable, reliable option for
biologists and professional programmers alike. We hope this article gives you a
reason to take a closer look at Python.
If you like what you've seen of Python, here are some additional resources
to explore. the Python DevCenter.
Sponsored by:
© 2014, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://www.onlamp.com/pub/a/python/2002/10/17/biopython.html?page=5 | CC-MAIN-2014-15 | refinedweb | 947 | 66.13 |
> vXworksBSPfors3c44b0.rar > dataSegPad.c
/* dataSegPad.c - padding for beginning of data segment */ /* Copyright 1984-1991 Wind River Systems, Inc. */ #include "copyright_wrs.h" /* modification history -------------------- 01c,19oct92,jcf change to include when INCLUDE_MMU_FULL defined. 01b,28jul92,rdc changed PAGE_SIZE to VM_PAGE_SIZE. 01a,21jul92,rdc written. */ /* DESCRIPTION This module conditionally creates a data structure the size of one page; it is explicility listed as the first module on the load line when VxWorks is linked to insure that this data structure is the first item in the data segment. This mechanism is needed to insure that the data segment does not overlap a page that is occupied by the text segment; when text segment protection is turned on, all pages that contain text are write protected. This insures that the data segment does not lie in a page that has been write protected. If text segment protection has not been included, this module compiles into a null object module. In an embedded system, this mechanism may not be needed if the loader explicitly places the data segment in a section of memory seperate from the text segment. */ #include "vxWorks.h" #include "config.h" #ifdef INCLUDE_MMU_FULL /* bootroms will not ref dataSegPad.o */ char dataSegPad [VM_PAGE_SIZE] = {1}; #endif /* INCLUDE_MMU_FULL */ | http://read.pudn.com/downloads54/sourcecode/unix_linux/187530/vxwork44b0/all/dataSegPad.c__.htm | crawl-002 | refinedweb | 205 | 53.81 |
The pros and cons Loose Coupling and Tight Coupling.
I have read many articles about loose coupling and tight coupling. I have realized that some programmers have been discussing the differences between loose coupling and tight coupling. I want to talk about the situation from my point of view.
Short Introduction Loose and Tight Coupling
Loose Coupling means reducing dependencies of a class that use a different class directly. In tight coupling, classes and objects are dependent on one another. In general, tight coupling is usually bad because it reduces flexibility and re-usability of code and it makes changes much more difficult and impedes testability etc.
Tight Coupling
A every programmer or there is a chance of overlooking changes. But each set of loosely coupled objects are not dependent on each other. (Stackoverfow-Jom George)
CodenamespaceTightCoupling{ public class Remote { private Television Tv { get; set;} protected Remote() { Tv = new Television(); }
static Remote() { _remoteController = new Remote(); } static Remote _remoteController; public static Remote Control { get { return _remoteController; } }
public void RunTv() { Tv.Start(); } }}
Tight Coupling creates some difficulties. Here, the task of the control object, the object needs to be able to television, the television remote control is dependent on the other phrase. So, what's the harm of the following dependencies:
TV without a remote control does not work.
TV changes the control directly affected by this change.
The Control can only control the TV, cannot control other devices.
Loose Coupling
Loose coupling is a design goal that seeks to reduce the inter-dependencies between components of a system with the goal of reducing the risk that changes in one component will require changes in any other component. Loose coupling is a much more generic concept intended to increase the flexibility of a system, make it more maintainable, and makes the entire framework more "stable".
Codepublic interface IRemote { void Run(); }public class Television : IRemote { protected Television() {
}
static Television() { _television = new Television(); } private static Television _television; public static Television Instance { get { return _television; } }
public void Run() { Console.WriteLine("Television is started!"); } }
We need a managing class that will produce an instance. The instance is generated from the implemented class. The Management Class constructor needs an interface which implements to any Class.
Code
public class Remote
{
IRemote _remote;
public Remote(IRemote remote) { _remote = remote; } public void Run() { _remote.Run(); } }
Usageclass Program{ static void Main(string[] args) {
Remote remote = new Remote(Television.Instance); remote.Run(); Console.Read(); } }Advantages
It will save you a lot of time for any project that isn't trivially small, where I define trivially small as less than a couple thousand lines of code (depending on the language). The reason is that once you get past super small projects, each change or update gets harder the more tightly coupled it is. Being loosely coupled enables you to keep moving forward, adding features, fixing bugs, etc.
At a certain point I think any program becomes a nightmare to maintain, update and add on to. The more loosely coupled the design is, the further that point is delayed. If it's tightly coupled, maybe after about 10,000 lines of code it becomes unmaintainable; adding features becomes impossible without essentially rewriting from scratch.
Being loosely coupled allows it to grow to 1,000,000 - 10,000,000 lines of code while still being able to make changes and add new features within a reasonable amount of time. These numbers aren't meant to be taken literally as they're just made up, but to provide a sense of where it becomes helpful. If you never need to update the program and it's fairly simple then sure, it's fine to be tightly coupled. It's even okay to start that way but understand that when it's time to separate stuff out, but you still need experience writing loosely coupled code to know at what point it becomes beneficial.; (From Stackoverflow-Davy8)
It improves testability.
It helps you follow the GOF principle of Program to Interfaces, not implementations.
The benefit is that it's much easier to swap other pieces of code/modules/objects/components when the pieces aren't dependent on one another.
It's highly changeable. One module does not break other modules in unpredictable ways
Summary
As with all OO design, there are trade-offs you have to make; is it more important for you to have highly modular code that is easy to swap in and out? Or is it more important to have easily understandable code that is simpler? You'll have to decide that.
References
View All
View All | https://www.c-sharpcorner.com/uploadfile/yusufkaratoprak/difference-between-loose-coupling-and-tight-coupling/ | CC-MAIN-2022-40 | refinedweb | 763 | 53.92 |
In this ESP32 tutorial, we will check how to get the Bluetooth address of the device, using the Arduino core. The tests of this ESP32 tutorial were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board.
Introduction
In this ESP32 tutorial, we will check how to get the Bluetooth address of the device, using the Arduino core.
The Bluetooth Device Address (sometimes referred as BD_ADDR) is a unique 6 byte identifier assigned to each Bluetooth device by the manufacturer [1].
One important thing to mentioned is that the 3 most significant bytes (upper part of the address) can be used to determine the manufacturer of the device [1].
Regarding the code, we will be using the IDF Bluetooth API, which provides to us a function to retrieve the mention address.
The tests of this ESP32 tutorial were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board.
If you prefer, you can check a video tutorial on how to obtain the device Bluetooth address on my Youtube channel. The approach used in this video is slightly different since we take advantage of the BluetoothSerial.h library to initialize the Bluetooth stack.
The code
We will start our code by including the libraries needed to both initialize the Bluetooth stack (esp_bt_main.h) and to have access to the function that allows to retrieve the device address (esp_bt_device.h).
#include "esp_bt_main.h" #include "esp_bt_device.h"
As we have been doing in the previous tutorials, we will create a function to initialize both the Bluetooth controller and host stacks. Although in this tutorial we are not going to actually perform any Bluetooth communication, we need to have the Bluedroid host stack initialized and enabled to be able to retrieve the device address [2].
So, our Bluetooth init function will be similar to what we have been doing in the previous tutorials. We first initialize and enable the controller stack with a call to the btStart function and then we initialize the Bluedroid stack with a call to the esp_bluedroid_init function. After that, we call the esp_bluedroid_enable to enable the Bluedroid stack.
You can check this init function below, already with all the mentioned calls and the error checking.; } }
We will follow the same approach and encapsulate the printing of the device address in a function, which we will call printDeviceAddress.
void printDeviceAddress() { // Print code here }
In order to get the device address, we simply need to call the esp_bt_dev_get_address function.
This function takes no arguments and returns the six bytes of the Bluetooth device address. In case the Bluedroid stack has not been initialized, it will return NULL [2].
Note that the six bytes will be returned as a pointer to an array of uint8_t, which we will store on a variable.
const uint8_t* point = esp_bt_dev_get_address();
As mentioned, this array will have 6 elements which we can iterate and print one by one.
for (int i = 0; i < 6; i++) { // Format and print the bytes }.
Note that in the standard format the address is displayed with each byte separated by colons [1], which is also the format we are going to use.
for (int i = 0; i < 6; i++) { char str[3]; sprintf(str, "%02X", (int)point[i]); Serial.print(str); if (i < 5){ Serial.print(":"); } }
Now that we finished our printing function, we will move on to the Arduino setup. There, we will initialize a serial connection to print the results of our program.
Followed by that, we will call the Bluetooth initialization function and the address printing function.
void setup() { Serial.begin(115200); initBluetooth(); printDeviceAddress(); }
The final source code can be seen below.
#include "esp_bt_main.h" #include "esp_bt_device.h"(); printDeviceAddress(); } void loop() {}
Testing the code
To test the code, simply compile it and upload it. When it finishes, open the Arduino IDE Serial Monitor and check the string that gets printed. You should have a result similar to figure 1, which shows the device address in the hexadecimal format we specified.
Figure 1 – Printing the Bluetooth address of the ESP32.
As mentioned in the introductory section, we can use this address to lookup the vendor of the Bluetooth device. You can use this website to make the lookup. As shown in figure 2, the device address shown before has Espressif (the company that makes the ESP32) as vendor.
Figure 2 – Bluetooth device address vendor lookup.
References
[1]
[2]
Related posts
-
7 Replies to “ESP32 Arduino: Getting the Bluetooth Device Address”
hey, nice work. i have been looking for something like this. but i have this one question: can we write the same code with an HC05 module of bluetooth and arduino UNO or the firebeetle board you have used is necessary? hope to get a reply at the earliest
LikeLiked by 1 person
Hi!
Thanks for the feedback 🙂
This code shown here will only work on a ESP32-based board.
The ESP32 has integrated Bluetooth and device specific APIs, even though we are using the Arduino core to program it.
The functions used here are not available for the Arduino uno. Think of the functions shown here as part of a library that only works for the ESP32.
When it comes to a system such as an Arduino uno + HC05, the working principle is completely different.
Basically, you have two distinct devices. Thus, your Arduino Uno has to talk with the HC05 via serial, and the HC05 will act as a bridge that will send/receive the data over Bluetooth.
Nonetheless, there should be plenty of tutorials for the HC05 and HC06 around the web. When I was more actively using Arduino boards, I recall these cheap devices being very popular to bring Bluetooth functionalities to the Arduino boards.
Hope this clarifies 🙂
Best regards,
Nuno Santos | https://techtutorialsx.com/2018/03/09/esp32-arduino-getting-the-bluetooth-device-address/ | CC-MAIN-2019-04 | refinedweb | 966 | 63.19 |
the mod_ssl module has the following code implmented:
static unsigned long ssl_util_thr;
#else
return (unsigned long) apr_os_thread_current();
#endif
}
And when the following code is called inside of openssl 0.9.8e:
case DLL_THREAD_DETACH:
ERR_remove_state(0);
break;
This causes the CRYPTO_set_id_callback() to be called. The problem is that
apr_os_thread_current() calls DuplicateHandle for the thread and this causes the
thread HANDLE to leak for the detaching thread.
This can be reproduced by having an apache module that that does the following
with the mod_ssl module loaded:
static void* APR_THREAD_FUNC testThread2(apr_thread_t *thd,void* input)
{
printf("Sample module: Another thread being launching -
%d\n",GetCurrentThreadId());
apr_thread_exit(thd,APR_SUCCESS);
return NULL;
}
static void* APR_THREAD_FUNC testThread(apr_thread_t *thd,void* input)
{
apr_thread_t *thd_arr;
apr_pool_t *mp;
apr_threadattr_t *thd_attr;
apr_pool_create(&mp, NULL);
apr_threadattr_create (&thd_attr, mp);
apr_threadattr_detach_set (thd_attr, 1);
while(1)
{
printf("Sample module, launching thread.\n");
//_beginthread(testThread2,0,NULL);
apr_thread_create(&thd_arr, thd_attr, testThread2, NULL, mp);
Sleep(5000);
}
apr_thread_exit(thd,APR_SUCCESS);
return NULL;
}
static int helloworld_handler(request_rec *r) {
apr_thread_t *thd_arr;
apr_pool_t *mp;
apr_threadattr_t *thd_attr;
/*...starting threads...", r);
apr_pool_create(&mp, NULL);
apr_threadattr_create (&thd_attr, mp);
apr_threadattr_detach_set (thd_attr, 1);
apr_thread_create(&thd_arr, thd_attr, testThread, NULL, mp);
/* we return OK to indicate that we have successfully processed
* the request. No further processing is required.
*/
return OK;
}
Calling apr_os_thread_current() should not cause a persistent memory leak; if it
does, it's an APR bug. (I'm not convinced it does, from reading the Win32 code;
it looks like a once-per-thread allocation, and so harmless?)
Calling apr_os_thread_current() causes a duplication of the Handle, so when the
thread detaches that handle turns in to a dangling pointer.
The problem is the mod_ssl code overwrites the default behavior for
ssl_util_thr_id which, when called in the case mentioned, will leak the handle.
It is not limited to APR thread creation/destruction, it can be any method you
choose to create/destroy threads and the leak will still occur. If you use the
standard _beginthread API or CreateThread API, the same thing occurs. It's
because the thread is detaching and there's nothing to clean up the duplicate
handle after it's detached.
There is a race condition between apr_thread_create() and dummy_worker(). dummy_worker() could start and proceed past the access of thd->td before apr_thread_create() has had a chance to set (*new)->td. This sets the TLS tls_apr_thread value to NULL. So when the created thread calls apr_os_thread_current(), the current thread's handle is duplicated and placed in the tls_apr_thread TLS slot. The tls_apr_thread recorded HANDLE is never closed.
The race condition and when the thread created as detached (attr->detach) as described in Comment 1 and Comment 2, leak a HANDLE if apr_os_thread_current() is called. | https://bz.apache.org/bugzilla/show_bug.cgi?id=42728 | CC-MAIN-2021-21 | refinedweb | 440 | 51.07 |
In article <E01E22F11C60D11183FC0000F81F9D1D5152AC@gensym-nt2.gensym.com> you wrote:
> On Monday, March 02, 1998 11:51 AM, Ralf S. Engelschall wrote:
>> In article <9803021509.AA12325@gensym1.gensym.com> you wrote:
>
> Developers should check the output of the script against the repository when
> they update to avoid letting things Apache didn't define slip in.
I'll add a message to Makefile.tmpl to make this clear.
>> >[...]
>> > 00002020 T start
>> >[...]
> Is it good, bad, or whatever that start is being redefined as AP_start?
Hmmm... I don't know what 'start' actually is. Seems it is together with 'end'
one of the standard compiler symbols, because I found them under Solaris, too.
I've added it to the exclusion list. BUT, nevertheless it doesn't make
problems or hurts something. Because as long as we do not use this symbol in
our sources it doesn' hurt is.
>[...]
> (a) Right now there are entries of the form:
> #define ap_md5 AP_ap_md5
> I'd lean toward removing those.
Because this is a double definition, you think? Hmmm... yes it is but on the
other hand when we exclude those this leads to a total confusion, for instance
when you debug such a httpd. Because then you have to think about twice: If
its ap_ then its in there exactly with this name and if not ap_ then AP_..
Hmmm... not nice. Instead we should consider the HIDE feature to be used only
in those cases where you really have to link Apache with a third-party library
and get namespace conflicts. Then HIDE=yes should be enabled to easily resolve
the conflict.
> (b) It wouldn't be hard to make the mistake of getting:
> #define AP_ind AP_AP_ind
> by running the thing twice with the facility turned on.
> Both of these appears easy to avoid.
No, it is hard because my helpers/UpdateHide script checks for this situation.
It first strips off any already existing AP_ prefix.
>[...]
>> > This adds perl to the tools required of the Apache
>> > developer.
>[...]
>.
While you point is ok in general, it doesn't apply to the HIDE feature.
Because first Perl here is only required for the developers and not for end
users and secondly even when the end user would need Perl, it already needs it
for other existing scripts in support/.
Greetings,
Ralf S. Engelschall
rse@engelschall.com
14 Mar 10:45 2013
Re: [X2Go-USer] X2Go Client Usage page on the wiki
Kjell Otto <otto.kjell@...>
2013-03-14 09:45:43 GMT
Hey Mike,

unfortunately I'm unable to delete them either, that's what I would have wanted to do. Maybe you can delete them with more permissions?

Greetings,
Kjellski

2013/3/14 Mike Gabriel <mike.gabriel@...>:
> Hi Otto,
>
> On Mi 13 Mär 2013 23:11:55 CET Kjell Otto wrote:
>
>> I've uploaded pictures using the media manager.
>> Unfortunately I've used the wrong namespace initially,
>> and I've not the rights to move or delete them.
>>
>> My bad, I'll just write the article until the pictures are moved.
>>
>> Greetings,
>> Kjellski
>
> I cannot move the images either (apart from doing mv in the file system).
> Please delete and re-upload.
>
> If you know an extension for the media manager that gives file manager
> functionality to it, please let us know.
>
> _______________________________________________
> X2Go-User mailing list
> X2Go-User@...
Description
Solution
A very interesting question.
When you see "palindrome substring", you first think of the manacher algorithm. emm... But what do we do after writing manacher?
We find that counting intersecting palindrome substrings directly is very troublesome, so instead of attacking the problem head-on we count the disjoint pairs and subtract them from the total number of pairs of palindrome substrings.
Next, consider how to find disjoint palindrome substrings.
We open two arrays f_i and g_i: f_i indicates how many palindrome substrings start at position i, and g_i indicates how many palindrome substrings end at position i.
When we see the "prefix sum" hint in the tags, we take the prefix sum of g_i and save it in sum_i; then sum_i represents the number of palindrome substrings ending at position i or earlier. We find that the number of disjoint pairs of palindrome substrings is the sum over i of sum_i * f_{i+1}.
Subtract this from the total number of pairs.
So how do we compute f_i and g_i?
We notice that the "difference array" hint in the tags has not been used yet. The tags really are helpful here, so consider using a difference array.
We have already used the manacher algorithm to compute the longest palindrome radius p_i centred at each point (p_i is the palindrome radius in the expanded string, which equals the length of the corresponding palindrome in the original string; if you don't understand this, learn manacher first, as it will not be repeated here).
We observe that a single radius p_i yields many palindrome substrings, namely the ones centred at i with every smaller radius.
That is, we have to add 1 to each of f_{i - p_i + 1} ... f_i and to each of g_i ... g_{i + p_i - 1}.
This is exactly what a difference array solves: do f_{i - p_i + 1} += 1 and f_{i + 1} -= 1, and g_i += 1 and g_{i + p_i} -= 1, then take prefix sums.
Finally, loop once more to accumulate the answer. Note that we have expanded the original string, so the loop increments by 2.
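As a sanity check of the counting argument, the same answer can be computed by brute force in Python (a hypothetical O(n^3) helper for tiny inputs only, 0-based indexing; the real solution uses manacher):

```python
def palindrome_pair_count(s):
    n = len(s)
    # all palindromic substrings as (start, end) inclusive intervals
    pals = [(i, j) for i in range(n) for j in range(i, n)
            if s[i:j + 1] == s[i:j + 1][::-1]]
    total = len(pals) * (len(pals) - 1) // 2   # C(tot, 2): all pairs
    f = [0] * n   # f[i]: palindromes starting at i
    g = [0] * n   # g[i]: palindromes ending at i
    for i, j in pals:
        f[i] += 1
        g[j] += 1
    disjoint = 0
    prefix = 0                                 # running sum of g[0..i]
    for i in range(n - 1):
        prefix += g[i]
        disjoint += prefix * f[i + 1]          # ends <= i paired with starts at i+1
    return total - disjoint

print(palindrome_pair_count("babb"))  # → 6
```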
Code
#include <iostream>
#include <cstdio>
#include <cstring>
#include <algorithm>
#define ll long long
using namespace std;

const ll N = 4e6 + 10;
const ll mod = 51123987;

ll n, ans, tot, sum;
char s[N], a[N];
ll f[N], g[N], p[N];

inline void manacher(){
    s[0] = '*', s[(n << 1) + 1] = '#';
    for(ll i = 1; i <= n; ++i) s[(i << 1) - 1] = '#', s[i << 1] = a[i];
    n = (n << 1) + 1;
    ll mx = 0, id = 0;
    for(ll i = 1; i <= n; ++i){
        if(i < mx) p[i] = min(mx - i, p[(id << 1) - i]);
        else p[i] = 1;
        while(i - p[i] >= 1 && i + p[i] <= n && s[i - p[i]] == s[i + p[i]]) p[i]++;
        if(i + p[i] > mx) mx = i + p[i], id = i;
        tot = (tot + (p[i] >> 1)) % mod;
    }
}

signed main(){
    scanf("%lld%s", &n, a + 1);
    manacher();
    for(ll i = 1; i <= n; ++i){
        f[i - p[i] + 1]++, f[i + 1]--;
        g[i]++, g[i + p[i]]--;
    }
    for(ll i = 1; i <= n; ++i) f[i] += f[i - 1], g[i] += g[i - 1];
    ans = tot * (tot - 1) / 2 % mod;
    for(ll i = 2; i <= n - 2; i += 2){
        sum = (sum + g[i]) % mod;
        ans = (ans - sum * f[i + 2] % mod + mod) % mod;
    }
    printf("%lld\n", ans);
    return 0;
}
Ticket #5095 (closed Bugs: fixed)
incorrect results from hypergeometric pdf
Description
#include <boost/math/distributions/hypergeometric.hpp>
#include <boost/math/policies/policy.hpp>
#include <iostream>

using namespace std;
using namespace boost;

int main() {
    unsigned N = 16086184;
    unsigned n = 256004;
    unsigned Q = 251138;
    math::hypergeometric_distribution<double> hyper(n, Q, N);
    cout << math::pdf<double>(hyper, 4000) << " "
         << math::pdf<double>(hyper, 4001) << " "
         << math::pdf<double>(hyper, 4002) << "\n";
    return 0;
}

Output: 0.00640003 1.11519e-09 0.00638443
The value for 4001 is incorrect (according to the online calculator). In fact, every value where k is odd appears to be incorrect.
Attachments
Change History
comment:1 Changed 5 years ago by David Koes <dkoes@…>
As a workaround, if the lgamma backup version of hypergeometric_pdf_lanczos_imp is used instead, correct results are obtained.
comment:2 Changed 5 years ago by David Koes <dkoes@…>

Properly formatted diff of the change in /usr/local/include

comment:3 Changed 5 years ago by johnmaddock
I've been unable to reproduce here either with VC10 or with gcc-4.4 on Ubuntu Linux, what compiler/platform are you on?
The output I see from your program is: 0.00640003 0.00639305 0.00638443 which I believe agrees with the online calculator.
comment:4 Changed 5 years ago by anonymous
gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5) on Ubuntu 10.4
Also, on Ubuntu 10.10 with boost version 1.40 and gcc version 4.4.5 (Ubuntu/Linaro? 4.4.4-14ubuntu5) and OS X (10.6) with boost version 1.41 and gcc version 4.2.1 (Apple Inc. build 5664).
Perhaps for some reason your installation is already falling back on the lgamma version of hypergeometric_pdf_lanczos_imp?
comment:5 Changed 5 years ago by anonymous
No it's calling for example:
hypergeometric_pdf_lanczos_imp<long double,boost::math::lanczos::lanczos13m53,boost::math::policies::policy<boost::math::policies::promote_float<0>,boost::math::policies::promote_double<0>> >(double formal, unsigned int x, unsigned int r, unsigned int n, unsigned int N, double formal, double formal);
I notice that you're using an out of date Boost distribution, can you:
- Please try the last release and if that still fails, then:
- Let me have the program output when BOOST_MATH_INSTRUMENT is #defined when building?
Thanks, John.
Changed 5 years ago by David Koes <dkoes@…>
program output with BOOST_MATH_INSTRUMENT defined
comment:6 Changed 5 years ago by David Koes <dkoes@…>
The Ubuntu 10.04 box has boost 1.45 on it. I also tried it on two other machines with the version of boost that they already have. I have also grabbed and built the svn trunk on the OS X machine, and tried removing /usr/local/include/boost and reinstalling on the 10.4 machine. I still get the failure.
I have attached the output with BOOST_MATH_INSTRUMENT defined.
Thanks.
comment:7 follow-up: ↓ 9 Changed 5 years ago by anonymous
Thanks for trying that, I really appreciate the help you're putting in to get to the bottom of this, unfortunately I still can't see what's gone wrong (other than something has), I'm also confused because there's really nothing in the code that should rely on the evenness of the random variable!
I'm attaching an updated hypergeometric_pdf.hpp with a lot more instrumentation, can I get you to try again? Just the failing case this time - otherwise the output is going to be huge! :(
Many thanks,
Still confused yours, John.
Changed 5 years ago by anonymous
- attachment hypergeometric_pdf.hpp
added
comment:8 Changed 5 years ago by anonymous
Forgot to say - if the program output is too large to attach here - zip it up and mail me direct at john at johnmaddock.co.uk
Thanks! John.
Changed 5 years ago by David Koes <dkoes@…>
- attachment BMIdumpmore.gz
added
even more instrumentation (bad value only, gzipped)
comment:9 in reply to: ↑ 7 Changed 5 years ago by David Koes <dkoes@…>
Attached. Here are some interesting results that may partially explain the reproducibility problem:
dkoes@quasar:~/tmp$ g++ hyper.cpp; ./a.out
1.11519e-09
dkoes@quasar:~/tmp$ g++ -m32 hyper.cpp; ./a.out
0.00639305
dkoes@quasar:~/tmp$ g++ -mfpmath=387 hyper.cpp; ./a.out
0.00639305
dkoes@quasar:~/tmp$ g++ -mfpmath=sse hyper.cpp; ./a.out
1.11519e-09
dkoes@quasar:~/tmp$ g++ -mno-sse hyper.cpp; ./a.out
In file included from /usr/local/include/boost/config/no_tr1/cmath.hpp:21,
                 from /usr/local/include/boost/math/policies/error_handling.hpp:15,
                 from /usr/local/include/boost/math/distributions/detail/common_error_handling.hpp:12,
                 from /usr/local/include/boost/math/distributions/hypergeometric.hpp:12,
                 from hyper.cpp:1:
/usr/include/c++/4.4/cmath: In function ‘double std::abs(double)’:
/usr/include/c++/4.4/cmath:94: error: SSE register return with SSE disabled
1.11519e-09
dkoes@quasar:~/tmp$ g++ -mno-sse2 hyper.cpp; ./a.out
1.11519e-09
dkoes@quasar:~/tmp$ g++ -mno-sse3 hyper.cpp; ./a.out
1.11519e-09
dkoes@quasar:~/tmp$ g++ -mno-sse4 hyper.cpp; ./a.out
1.11519e-09
dkoes@quasar:~/tmp$ g++ -DBOOST_MATH_SPECIAL_FUNCTIONS_LANCZOS_SSE2 hyper.cpp; ./a.out
1.11519e-09
comment:10 Changed 5 years ago by anonymous
Thanks for that, some of the values in the "exponents" table are getting truncated, I'm attaching an updated header with some extra typecasts present to try and prevent this, can you give this a try?
Thanks, John.
Changed 5 years ago by anonymous
- attachment hypergeometric_pdf.2.hpp
added
comment:11 Changed 5 years ago by David Koes <dkoes@…>
Bingo. That fixed it. Thanks!
comment:12 Changed 5 years ago by johnmaddock
comment:13 Changed 5 years ago by anonymous
- Status changed from new to closed
- Resolution set to fixed
example | https://svn.boost.org/trac/boost/ticket/5095 | CC-MAIN-2016-07 | refinedweb | 943 | 50.43 |
import "github.com/ipfs/go-filestore"
Package filestore implements a Blockstore which is able to read certain blocks of data directly from its original location in the filesystem.
In a Filestore, object leaves are stored as FilestoreNodes. FilestoreNodes include a filesystem path and an offset, allowing a Blockstore dealing with such blocks to avoid storing the whole contents and reading them from their filesystem location instead.
filestore.go fsrefstore.go util.go
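To make the path-and-offset idea concrete, here is a toy sketch in Python (not the package's API — the FileRef name is invented for illustration): a reference "block" stores only (path, offset, size) and reads the bytes lazily from their original location.

```python
import os
import tempfile

class FileRef:
    """Toy analogue of a reference block: (path, offset, size)."""
    def __init__(self, path, offset, size):
        self.path, self.offset, self.size = path, offset, size

    def read(self):
        # Read the referenced bytes straight from their original file,
        # instead of keeping a copy of the data in the block store.
        with open(self.path, "rb") as fh:
            fh.seek(self.offset)
            data = fh.read(self.size)
        if len(data) != self.size:
            # cf. StatusFileChanged / StatusFileNotFound in this package
            raise IOError("backing file changed or truncated")
        return data

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello filestore")
    path = tmp.name

ref = FileRef(path, 6, 9)
print(ref.read())  # → b'filestore'
os.unlink(path)
```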
FilestorePrefix identifies the key prefix for FileManager blocks.
IsURL returns true if the string represents a valid URL that the urlstore can handle. More specifically it returns true if a string begins with 'http://' or 'https://'.
ListAll returns a function as an iterator which, once invoked, returns one by one each block in the Filestore's FileManager. ListAll does not verify that the references are valid or whether the raw data is accessible. See VerifyAll().
VerifyAll returns a function as an iterator which, once invoked, returns one by one each block in the Filestore's FileManager. VerifyAll checks that the reference is valid and that the block data can be read.
CorruptReferenceError implements the error interface. It is used to indicate that the block contents pointed by the referencing blocks cannot be retrieved (i.e. the file is not found, or the data changed as it was being read).
func (c CorruptReferenceError) Error() string
Error() returns the error message in the CorruptReferenceError as a string.
type FileManager struct { AllowFiles bool AllowUrls bool // contains filtered or unexported fields }
FileManager is a blockstore implementation which stores special blocks FilestoreNode type. These nodes only contain a reference to the actual location of the block data in the filesystem (a path and an offset).
func NewFileManager(ds ds.Batching, root string) *FileManager
NewFileManager initializes a new file manager with the given datastore and root. All FilestoreNodes paths are relative to the root path given here, which is prepended for any operations.
AllKeysChan returns a channel from which to read the keys stored in the FileManager. If the given context is cancelled the channel will be closed.
func (f *FileManager) DeleteBlock(c cid.Cid) error
DeleteBlock deletes the reference-block from the underlying datastore. It does not touch the referenced data.
Get reads a block from the datastore. Reading a block is done in two steps: the first step retrieves the reference block from the datastore. The second step uses the stored path and offsets to read the raw block data directly from disk.
GetSize gets the size of the block from the datastore.
This method may successfully return the size even if returning the block would fail because the associated file is no longer available.
Has returns if the FileManager is storing a block reference. It does not validate the data, nor checks if the reference is valid.
func (f *FileManager) Put(b *posinfo.FilestoreNode) error
Put adds a new reference block to the FileManager. It does not check that the reference is valid.
func (f *FileManager) PutMany(bs []*posinfo.FilestoreNode) error
PutMany is like Put() but takes a slice of blocks instead, allowing it to create a batch transaction.
Filestore implements a Blockstore by combining a standard Blockstore to store regular blocks and a special Blockstore called FileManager to store blocks which data exists in an external file.
func NewFilestore(bs blockstore.Blockstore, fm *FileManager) *Filestore
NewFilestore creates one using the given Blockstore and FileManager.
AllKeysChan returns a channel from which to read the keys stored in the blockstore. If the given context is cancelled the channel will be closed.
DeleteBlock deletes the block with the given key from the blockstore. As expected, in the case of FileManager blocks, only the reference is deleted, not its contents. It may return ErrNotFound when the block is not stored.
func (f *Filestore) FileManager() *FileManager
FileManager returns the FileManager in Filestore.
Get retrieves the block with the given Cid. It may return ErrNotFound when the block is not stored.
GetSize returns the size of the requested block. It may return ErrNotFound when the block is not stored.
Has returns true if the block with the given Cid is stored in the Filestore.
HashOnRead calls blockstore.HashOnRead.
func (f *Filestore) MainBlockstore() blockstore.Blockstore
MainBlockstore returns the standard Blockstore in the Filestore.
Put stores a block in the Filestore. For blocks of underlying type FilestoreNode, the operation is delegated to the FileManager, while the rest of blocks are handled by the regular blockstore.
PutMany is like Put(), but takes a slice of blocks, allowing the underlying blockstore to perform batch transactions.
type ListRes struct { Status Status ErrorMsg string Key cid.Cid FilePath string Offset uint64 Size uint64 }
ListRes wraps the response of the List*() functions, which allows to obtain and verify blocks stored by the FileManager of a Filestore. It includes information about the referenced block.
List fetches the block with the given key from the Filemanager of the given Filestore and returns a ListRes object with the information. List does not verify that the reference is valid or whether the raw data is accesible. See Verify().
Verify fetches the block with the given key from the Filemanager of the given Filestore and returns a ListRes object with the information. Verify makes sure that the reference is valid and the block data can be read.
FormatLong returns a human readable string for a ListRes object
Status is used to identify the state of the block data referenced by a FilestoreNode. Among other places, it is used by CorruptReferenceError.
const ( StatusOk Status = 0 StatusFileError Status = 10 // Backing File Error StatusFileNotFound Status = 11 // Backing File Not Found StatusFileChanged Status = 12 // Contents of the file changed StatusOtherError Status = 20 // Internal Error, likely corrupt entry StatusKeyNotFound Status = 30 )
These are the supported Status codes.
Format returns the status formatted as a string with leading 0s.
String provides a human-readable representation for Status codes.
Package filestore imports 19 packages and is imported by 20 packages. Updated 2019-11-12.
oop in python - What's the difference between a method and a function?
Can someone provide a simple explanation of methods vs. functions in an OOP context?
As you can see, you can call a function anywhere, but if you want to call a method you either have to pass an object of the class in which the method is declared (Class.method(object)) or you have to invoke the method on the object (object.method()), at least in Python.
Think of methods as things only one entity can do. So if you have a Dog class, it would make sense to have a bark function only inside that class, and that would be a method. If you also have a Person class, it could make sense to write a "feed" function that doesn't belong to either class, since both humans and dogs can be fed, and you could call that a function since it does not belong to any class in particular.
If you feel like reading here is "My introduction to OO methods"
The idea behind the object-oriented paradigm is to treat the software as composed of... well, "objects". Objects in the real world have properties; for instance, if you have an Employee, the employee has a name, an employee id, a position, he belongs to a department, etc.
The object also knows how to deal with its attributes and perform some operations on them. Let's say if we want to know what an employee is doing right now, we would ask him:
employee whatAreYouDoing.
That "whatAreYouDoing" is a "message" sent to the object. The object knows how to answer to that questions, it is said it has a "method" to resolve the question.
So, the way objects have to expose its behavior are called methods. Methods thus are the artifact object have to "do" something.
Other possible methods are
employee whatIsYourName employee whatIsYourDepartmentsName
etc.
Functions in the other hand are ways a programming language has to compute some data, for instance you might have the function addValues( 8 , 8 ) that returns 16
// pseudo-code
function addValues( int x, int y )
    return x + y

// call it
result = addValues( 8, 8 )
print result // output is 16
Since first popular programming languages ( such as fortran, c, pascal ) didn't cover the OO paradigm, they only call to these artifacts "functions".
for instance the previous function in C would be:
int addValues( int x, int y ) { return x + y; }
It is not "natural" to say an object has a "function" to perform some action, because functions are more related to mathematical stuff while an Employee has little mathematic on it, but you can have methods that do exactly the same as functions, for instance in Java this would be the equivalent addValues function.
public static int addValues( int x, int y ) { return x + y; }
Looks familiar? That´s because Java have its roots on C++ and C++ on C.
At the end is just a concept, in implementation they might look the same, but in the OO documentation these are called method.
Here´s an example of the previously Employee object in Java.
public class Employee {
    Department department;
    String name;

    public String whatIsYourName() {
        return this.name;
    }
    public String whatIsYourDepartmentsName() {
        return this.department.name();
    }
    public String whatAreYouDoing() {
        return "nothing";
    }
    // Ignore the following, only set here for completeness
    public Employee( String name ) {
        this.name = name;
    }
}

// Usage sample.
Employee employee = new Employee( "John" ); // Creates an employee called John

// If I want to display what this employee is doing, I could use its methods
// to find out.
String name = employee.whatIsYourName();
String doingWhat = employee.whatAreYouDoing();

// Print the info to the console.
System.out.printf("Employee %s is doing: %s", name, doingWhat );

// Output: Employee John is doing nothing.
The difference then, is on the "domain" where it is applied.
AppleScript has the idea of a "natural language" metaphor, which at some point OO had as well. For instance, Smalltalk. I hope it may be reasonably easier for you to understand methods in objects after reading this.
NOTE: The code is not to be compiled, just to serve as an example. Feel free to modify the post and add Python example.
In the OO world, the two are commonly used to mean the same thing.
From a pure Math and CS perspective, a function will always return the same result when called with the same arguments ( f(x,y) = (x + y) ). A method on the other hand, is typically associated with an instance of a class. Again though, most modern OO languages no longer use the term "function" for the most part. Many static methods can be quite like functions, as they typically have no state (not always true).
A function is a mathematical concept. For example:
f(x,y) = sin(x) + cos(y)
says that function f() will return the sin of the first parameter added to the cosine of the second parameter. It's just math. As it happens sin() and cos() are also functions. A function has another property: all calls to a function with the same parameters, should return the same result.
A method, on the other hand, is a function that is related to an object in an object-oriented language. It has one implicit parameter: the object being acted upon (and its state).
So, if you have an object Z with a method g(x), you might see the following:
Z.g(x) = sin(x) + cos(Z.y)
In this case, the parameter x is passed in, the same as in the function example earlier. However, the parameter to cos() is a value that lives inside the object Z. Z and the data that lives inside it (Z.y) are implicit parameters to Z's g() method.
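In Python this correspondence can be shown directly (a minimal sketch; the names f, Z, and g mirror the ones above):

```python
import math

def f(x, y):                       # plain function: depends only on its arguments
    return math.sin(x) + math.cos(y)

class Z:
    def __init__(self, y):
        self.y = y                 # state carried by the object

    def g(self, x):                # method: 'self' is the implicit parameter
        return math.sin(x) + math.cos(self.y)

z = Z(0.5)
print(z.g(1.0) == f(1.0, 0.5))     # → True: the method is f with y supplied by the object
```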
Methods are functions of classes. In normal jargon, people interchange method and function all over. Basically you can think of them as the same thing (not sure if global functions are called methods).
Methods on a class act on the instance of the class, called the object.
class Example
{
    public int data = 0; // Each instance of Example holds its internal data. This is a "field", or "member variable".

    public void UpdateData() // ... and manipulates it (this is a method, by the way)
    {
        data = data + 1;
    }

    public void PrintData() // This is also a method
    {
        Console.WriteLine(data);
    }
}

class Program
{
    public static void Main()
    {
        Example exampleObject1 = new Example();
        Example exampleObject2 = new Example();

        exampleObject1.UpdateData();
        exampleObject1.UpdateData();
        exampleObject2.UpdateData();

        exampleObject1.PrintData(); // Prints "2"
        exampleObject2.PrintData(); // Prints "1"
    }
}
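A rough Python equivalent of the C# example above (a hypothetical translation with the same behaviour) makes the per-instance state just as explicit:

```python
class PyExample:
    def __init__(self):
        self.data = 0             # each instance holds its own internal data

    def update_data(self):        # a method: manipulates this instance's state
        self.data += 1

    def print_data(self):         # also a method
        print(self.data)

obj1 = PyExample()
obj2 = PyExample()
obj1.update_data()
obj1.update_data()
obj2.update_data()
obj1.print_data()  # → 2
obj2.print_data()  # → 1
```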
Since you mentioned Python, the following might be a useful illustration of the relationship between methods and objects in most modern object-oriented languages. In a nutshell what they call a "method" is just a function that gets passed an extra argument (as other answers have pointed out), but Python makes that more explicit than most languages.
# perfectly normal function
def hello(greetee):
    print "Hello", greetee

# generalise a bit (still a function though)
def greet(greeting, greetee):
    print greeting, greetee

# hide the greeting behind a layer of abstraction (still a function!)
def greet_with_greeter(greeter, greetee):
    print greeter.greeting, greetee

# very simple class we can pass to greet_with_greeter
class Greeter(object):
    def __init__(self, greeting):
        self.greeting = greeting

    # while we're at it, here's a method that uses self.greeting...
    def greet(self, greetee):
        print self.greeting, greetee

# save an object of class Greeter for later
hello_greeter = Greeter("Hello")

# now all of the following print the same message
hello("World")
greet("Hello", "World")
greet_with_greeter(hello_greeter, "World")
hello_greeter.greet("World")
Now compare the function greet_with_greeter and the method greet: the only difference is the name of the first parameter (in the function I called it "greeter", in the method I called it "self"). So I can use the greet method in exactly the same way as I use the greet_with_greeter function (using the "dot" syntax to get at it, since I defined it inside a class):
Greeter.greet(hello_greeter, "World")
So I've effectively turned a method into a function. Can I turn a function into a method? Well, as Python lets you mess with classes after they're defined, let's try:
Greeter.greet2 = greet_with_greeter
hello_greeter.greet2("World")
Yes, the function greet_with_greeter is now also known as the method greet2. This shows the only real difference between a method and a function: when you call a method "on" an object by calling object.method(args), the language magically turns it into method(object, args).
(OO purists might argue a method is something different from a function, and if you get into advanced Python or Ruby - or Smalltalk! - you will start to see their point. Also some languages give methods special access to bits of an object. But the main conceptual difference is still the hidden extra parameter.)
A function is a set of logic that can be used to manipulate data, while a method is a function that is used to manipulate the data of the object it belongs to. So technically, if you have a function that is not completely related to your class but was declared in the class, it's not a method; it's called bad design.
From my understanding a method is any operation which can be performed on a class. It is a general term used in programming.
In many languages methods are represented by functions and subroutines. The main distinction that most languages use for these is that functions may return a value back to the caller and a subroutine may not. However many modern languages only have functions, but these can optionally not return any value.
For example, lets say you want to describe a cat and you would like that to be able to yawn. You would create a Cat class, with a Yawn method, which would most likely be a function without any return value.
IMHO people just wanted to invent a new word for easier communication between programmers when they wanted to refer to functions inside objects.
If you say methods, you mean functions inside a class. If you say functions, you mean simply functions outside a class.
The truth is that both words are used to describe functions. Even if you used them wrongly, nothing bad happens. Both words describe well what you want to achieve in your code.
A function is code that has to play a role (a function) of doing something. A method is a means to resolve the problem.
It does the same thing. It is the same thing. If you want to be super precise and follow the convention, you can call the functions inside objects methods.
Let's not over complicate what should be a very simple answer. Methods and functions are the same thing. You call a function a function when it is outside of a class, and you call a function a method when it is written inside a class.
I know many others have already answered, but I found the following to be a simple, yet effective single-line answer. Though it doesn't look a lot better than other answers here, if you read it carefully it has everything you need to know about the method vs. function distinction.
A method is a function that has a defined receiver, in OOP terms, a method is a function on an instance of an object.
Difference Between Methods and Functions
From reading this doc on Microsoft
Members that contain executable code are collectively known as the function members of a class. The preceding section describes methods, which are the primary kind of function members. This section describes the other kinds of function members supported by C#: constructors, properties, indexers, events, operators, and finalizers.
So methods are a subset of functions. Every method is a function, but not every function is a method; for example, a constructor can't be called a method, but it is a function.
A class is a collection of some data and functions, optionally with a constructor.
When you create an instance (a copy, a replication) of a particular class, the constructor initializes the class and returns an object.
Now the class has become an object (without the constructor), and in the object context its functions are known as methods.
So basically
Class <==new==>Object
Function <==new==>Method
In Java it is generally said that the constructor has the same name as the class, but in reality the constructor is like an instance block or a static block, except with a user-defined return type (i.e., the class type).
While a class can have a static block, an instance block, a constructor, and functions, the object generally has only data and methods.
In this example we will connect to an MQTT topic. I used a Wemos Lolin32 – you can use any ESP32 development board
We used cloudmqtt which has a free option and then create an instance, you would see something like this
Now if we click on the instance that we created you can find the information you need to enter for the MQTT server
i have removed the username and password from the image below but this will give you an idea of what you will see
Here is the complete code example
#include <WiFi.h>
#include <PubSubClient.h>

...

WiFiClient espClient;
PubSubClient client(espClient);

...

    client.publish("esp32/esp32test", "Hello from ESP32learning");
}

void loop() {
    client.loop();
}
Open the serial monitor and you should see something like the following
Connecting to WiFi..
Connecting to WiFi..
Connected to the WiFi network
Connecting to MQTT…
connected
To test this quickly and easily I use MQTTLens in Chrome, you can see in the screen capture below I subscribed to esp32/esp32test and you can also see messages coming through
| http://www.esp32learning.com/code/publishing-messages-to-mqtt-topic-using-an-esp32.php | CC-MAIN-2018-51 | refinedweb | 174 | 55.88 |
prctl man page
- PR_SET_NO_NEW_PRIVS (since Linux 3.5)
Set the calling thread's no_new_privs bit. Once set, this bit cannot be unset. The setting of this bit is inherited by children created by fork(2) and clone(2), and preserved across execve(2).
Since Linux 4.10, the value of a thread's no_new_privs bit can be viewed via the NoNewPrivs field of /proc/[pid]/status.
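The one-way behaviour of the no_new_privs bit can be demonstrated from Python via ctypes (a Linux-only sketch; the PR_* constant values are assumed from <linux/prctl.h>):

```python
import ctypes

# Constant values assumed from <linux/prctl.h>; Linux-only.
PR_SET_NO_NEW_PRIVS = 38
PR_GET_NO_NEW_PRIVS = 39

libc = ctypes.CDLL(None, use_errno=True)

# Setting the bit is one-way: it cannot be unset again, and children
# inherit it across fork()/execve().
libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)
print(libc.prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0))  # → 1 once set
```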
- PR_GET_THP_DISABLE (since Linux 3.15)
Return (via the function result) the current setting of the "THP disable" flag for the calling thread: either 1, if the flag is set, or 0, if it is not.
- PR_GET_TID_ADDRESS (since Linux 3.5)
Retrieve the clear_child_tid address set by set_tid_address(2) and by the clone(2) CLONE_CHILD_CLEARTID flag, in the location pointed to by (int **) arg2.
- PR_SET_TIMERSLACK (since Linux 2.6.28)
If the nanosecond value supplied in arg2 is greater than zero, then the "current" timer slack value is set to this value. If arg2 is less than or equal to zero, the "current" timer slack is reset to the thread's "default" timer slack value.
- ENXIO
option was PR_MPX_ENABLE_MANAGEMENT or PR_MPX_DISABLE_MANAGEMENT and the kernel or the CPU does not support MPX management. Check that the kernel and processor have MPX support.
See Also
signal(2), core(5)
Colophon
This page is part of release 4.16 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Referenced By
arch_prctl(2), capabilities(7), capng_change_id(3), capng_lock(3), core(5), environ(7), execve(2), _exit(2), exit(3), fork(2), getpid(2), lttng-ust(3), madvise(2), nbdkit-captive(1), perf_event_open(2), perl5140delta(1), perlvar(1), pid_namespaces(7), proc(5), procenv(1), pthread_setname_np(3), ptrace(2), sd_event_add_time(3), seccomp(2), setpriv(1), syscalls(2), systemd.exec(5), systemd-nspawn(1), systemd-system.conf(5), systemd.timer(5), time(7), wait(2). | https://www.mankier.com/2/prctl | CC-MAIN-2018-43 | refinedweb | 262 | 60.11 |
18 March 2010 13:04 [Source: ICIS news]
(Adds CEO comments throughout)
Major projects such as Borouge 2 - a joint venture with Abu Dhabi National Oil Company (ADNOC) at Ruwais in the United Arab Emirates, which is expected to come online in mid-2010 - meant the company would incur costs in the months between start-ups and products finally reaching customers, as no value in terms of additional sales would be created, according to Garrett.
“We expect 2010 to be even tougher instead of being easier,” Garrett said.
“We think the economy is still nervous, and although we see some recovery, we believe it will be more difficult for our company because we are starting up both Borouge 2 and an LDPE [low density polyethylene] plant at Stenungsund in Sweden.”
However, Garrett added that once the company’s major projects were completed, Borealis would start to reap the rewards and the industry would see the group’s financial figures improve.
The company would continue to focus on innovation, improving operations, cash generation and cost cutting, Garrett said.
The polyolefin maker posted a fourth-quarter net profit of €13m ($17.8m), reversing the heavy €122m loss incurred in the same period of 2008.
It also recorded an operating profit of €11m in the last three months of 2009, against a €199m loss in the previous corresponding period, despite a 5.9% fall in sales to €1.27bn.
For the full year, Borealis’s net profit shrank 84% year on year to €38m as sales plunged 30% to €4.71bn, the company said. Operating profit last year declined 85.3% to €24m.
Reflecting on the results, Garrett was upbeat about Borealis’s performance.
“Despite a tough year for the plastics industry, Borealis achieved a positive result,” Garrett said.
“The figures are a much better result than what we achieved in the boom years... this result was much tougher to achieve because if you look back then, everyone was making money and now we are making money when no one else has been,” he added.
($1 = €0.73)
Additional reporting by Pearl Bantillo
Read Paul Hodges’ Chemicals and the Economy blog
Please visit the complete ICIS plants and projects database | http://www.icis.com/Articles/2010/03/18/9344014/borealis-expects-tough-2010-on-costs-from-new-start-ups.html | CC-MAIN-2015-06 | refinedweb | 368 | 60.04 |
ImportError PySide phonon.so. undefined symbol: _ZTIN6Phonon19AbstractAudioOutputE
Bug Description
When I try to import PySide.
File "/.../videowidg
from PySide.phonon import Phonon
ImportError: /usr/lib/
My code is nearly perfect ('cause it works well on Arch Linux with PySide 1.0.0, Python 2.7.2) but not on Ubuntu 11.10 (python ver: 2.7.2+, PySide ver.: 1.0.6).
If someone can provide me an instant solution I would mention his/her name in my 'Who saved my life' book ;)
Looks like our solution to bug 832864 was bad. I think I know what to do, just doing a test build (and going to sleep while it runs)
Uploaded, waiting for archive-admin review. I tested it by running the phonon examples from the pyside-examples git tree.
Thank you for your fast reaction, now PySide is working properly on my pc.
Balazs
Confirmed. It appears that PySide's phonon extension doesn't link to libphonon.
#include <sys/types.h>
#include <sys/buf.h>
#include <sys/conf.h>
#include <sys/errno.h>
#include <sys/ddi.h>
int prefixdevinfo(void *idata, channel_t channel, di_parm_t parm, void *valp);
The type of information that can be returned includes media type of the device (for example: disk, tape, cdrom); device size, and breakup control information (see the bcb(D4) structure).
struct {
    daddr_t blkno;
    ushort_t blkoff;
};

Returns EOPNOTSUPP if the device size is unknown; otherwise, sets valp to the device size and returns 0. If the size is effectively unlimited (such as a tape), set the size to zero.
int prefixdevinfo(dev_t dev, di_parm_t parm, void **valp);

dev is the device number.
In DDI versions prior to version 8, the supported parameters are DI_BCBP and DI_MEDIA. The DI_BCBP parameter is the same as DI_RBCBP, except that it applies to B_WRITE operations as well. The size(D2) entry point routine in earlier versions corresponds to the DI_SIZE parameter. All other parameters represent new functionality and have no equivalents in earlier DDI versions.
In DDI versions prior to version 8, devinfo( ) is a named entry point and must be defined as a global symbol.
d_devinfo member of their drvops(D4) structure.
Named entry point routines must be declared in the driver's Master(DSP/4dsp) file. The declaration for this entry point is $entry devinfo. This applies only to non-STREAMS drivers that use DDI versions prior to version 8.
devinfo(D2mdi), devinfo(D2sdi), devinfo(D2str) | http://osr600doc.xinuos.com/en/man/html.D2/devinfo.D2.html | CC-MAIN-2022-21 | refinedweb | 242 | 50.63 |
Solving integral equations with fsolve
Posted January 23, 2013 at 09:00 AM | categories: nonlinear algebra | tags: reaction engineering | View Comments
Updated March 06, 2013 at 04:26 PM
Occasionally we have integral equations we need to solve in engineering problems, for example, the volume of plug flow reactor can be defined by this equation: \(V = \int_{Fa(V=0)}^{Fa} \frac{1}{r_a} dFa\) where \(r_a\) is the rate law. Suppose we know the reactor volume is 100 L, the inlet molar flow of A is 1 mol/L, the volumetric flow is 10 L/min, and \(r_a = -k Ca\), with \(k=0.23\) 1/min. What is the exit molar flow rate? We need to solve the following equation:
$$100 = \int_{Fa(V=0)}^{Fa} \frac{1}{-k Fa/\nu} dFa$$
We start by creating a function handle that describes the integrand. We can use this function in the quad command to evaluate the integral.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

k = 0.23
nu = 10.0
Fao = 1.0

def integrand(Fa):
    return -1.0 / (k * Fa / nu)

def func(Fa):
    integral, err = quad(integrand, Fao, Fa)
    return 100.0 - integral

vfunc = np.vectorize(func)
We will need an initial guess, so we make a plot of our function to get an idea.
import matplotlib.pyplot as plt

f = np.linspace(0.01, 1)
plt.plot(f, vfunc(f))
plt.xlabel('Molar flow rate')
plt.savefig('images/integral-eqn-guess.png')
plt.show()
Now we can see a zero is near Fa = 0.1, so we proceed to solve the equation.
Fa_guess = 0.1
Fa_exit, = fsolve(vfunc, Fa_guess)
print 'The exit concentration is {0:1.2f} mol/L'.format(Fa_exit / nu)
>>> The exit concentration is 0.01 mol/L
1 Summary notes
This example seemed a little easier in Matlab, where the quad function seemed to get automatically vectorized. Here we had to do it by hand.
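As a sanity check (my addition, not part of the original post, written in Python 3 syntax): for this first-order rate law the design equation integrates analytically to \(Fa = Fa_0 e^{-kV/\nu}\), so the fsolve result can be compared against the closed form.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

k, nu, Fao, V = 0.23, 10.0, 1.0, 100.0

def residual(Fa):
    integral, err = quad(lambda f: -1.0 / (k * f / nu), Fao, Fa)
    return V - integral

Fa_exit, = fsolve(np.vectorize(residual), 0.1)

# Analytic solution: V = (nu / k) * ln(Fao / Fa)  =>  Fa = Fao * exp(-k * V / nu)
Fa_analytic = Fao * np.exp(-k * V / nu)
print(Fa_exit, Fa_analytic)  # both approximately 0.1003
```

Both routes agree to within solver tolerance, which also confirms the 0.01 mol/L exit concentration computed above.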
Copyright (C) 2013 by John Kitchin. See the License for information about copying. | http://kitchingroup.cheme.cmu.edu/blog/2013/01/23/Solving-integral-equations-with-fsolve/ | CC-MAIN-2019-26 | refinedweb | 353 | 60.61 |
See the tutorial on YouTube and hear how you can use it as a prank!
If you want a deeper description of how to create a mosaic you should read the following tutorial, which shows the code on how to do it.
You need to install OpenCV. If you use PyCharm you can follow this tutorial.
Otherwise you can install the libraries as follows.
pip install opencv-python
pip install numpy
pip install xlsxwriter
Please ask if you have troubles with it.
This will just provide the code for you to enjoy. The process code is used from the tutorial linked above, where it is described.
import cv2
import numpy as np
import xlsxwriter


def create_mosaic_in_excel(photo, box_height, box_width, col_width=2, row_height=15):
    # Get the height and width of the photo
    height, width, _ = photo.shape

    # Create Excel workbook and worksheet
    workbook = xlsxwriter.Workbook('mosaic.xlsx')
    worksheet = workbook.add_worksheet("Urgent")

    # Resize columns and rows
    worksheet.set_column(0, width//box_width - 1, col_width)
    for i in range(height//box_height):
        worksheet.set_row(i, row_height)

    # Create mosaic
    for i in range(0, height, box_height):
        for j in range(0, width, box_width):
            # Create region of interest (ROI)
            roi = photo[i:i + box_height, j:j + box_width]

            # Use numpy to calculate mean in ROI of color channels
            b_mean = np.mean(roi[:, :, 0])
            g_mean = np.mean(roi[:, :, 1])
            r_mean = np.mean(roi[:, :, 2])

            # Convert mean to int
            b_mean_int = b_mean.astype(int).item()
            g_mean_int = g_mean.astype(int).item()
            r_mean_int = r_mean.astype(int).item()

            # Create color code
            color = '#{:02x}{:02x}{:02x}'.format(r_mean_int, g_mean_int, b_mean_int)

            # Add color code to cell
            cell_format = workbook.add_format()
            cell_format.set_bg_color(color)
            worksheet.write(i//box_height, j//box_width, "", cell_format)

    # Close and write the Excel sheet
    workbook.close()


def main():
    photo = cv2.imread("rune.png")

    number_cols = 50
    number_rows = 45

    # Get height and width of photo
    height, width, _ = photo.shape

    box_width = width // number_cols
    box_height = height // number_rows

    # To make sure that we can slice the photo in box-sizes
    width = (width // box_width) * box_width
    height = (height // box_height) * box_height
    photo = cv2.resize(photo, (width, height))

    # Create the Excel mosaic
    create_mosaic_in_excel(photo.copy(), box_height, box_width, col_width=2, row_height=15)


main()
The above tutorial assumes a photo of me in rune.png. I used the one taken from this page. You should obviously change it to something else.
You can change how many columns and rows in the Excel sheet the mosaic should be. This is done by changing the values of number_cols and number_rows.
Then you can change the values of col_width=2 and row_height=15.
In the YouTube video I use this free picture from Pexels (download) and modify number_cols = 100 and number_rows = 90, and col_width=1 and row_height=6. | https://www.learnpythonwithrune.org/create-photo-mosaic-in-excel-with-python/ | CC-MAIN-2021-25 | refinedweb | 440 | 68.36 |
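To sanity-check the color-code step on its own (my addition; the ROI values below are made up), note that OpenCV stores channels in BGR order while the Excel format string expects RGB:

```python
import numpy as np

# A hypothetical 2x2 region of interest in OpenCV's BGR channel order
roi = np.array([[[10, 200, 30], [20, 210, 40]],
                [[30, 220, 50], [40, 230, 60]]], dtype=np.uint8)

b_mean_int = int(np.mean(roi[:, :, 0]))  # 25
g_mean_int = int(np.mean(roi[:, :, 1]))  # 215
r_mean_int = int(np.mean(roi[:, :, 2]))  # 45

# Reorder to RGB when building the hex string for xlsxwriter
color = '#{:02x}{:02x}{:02x}'.format(r_mean_int, g_mean_int, b_mean_int)
print(color)  # '#2dd719'
```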
domain_objects 0.2.1-nullsafety
domain_objects #
domain_objects is an open source project — it's one among many other shared libraries that make up the wider ecosystem of software made and open sourced by
Savannah Informatics Limited.
A shared library for BeWell-Consumer and SladeAdvantage, responsible for aggregating core domain objects.
Installation Instructions #
Use this package as a library by depending on it
Run this command:
- With Flutter:
$ flutter pub add domain_objects
This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):
dependencies:
  domain_objects: ^0.2.1-nullsafety
Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.
Lastly:
Import it like so:
import 'package:domain_objects/entities.dart';
Usage #
Check the example provided for how to use this package.
Dart & Flutter Version #
- Dart 2: >= 2.14
- Flutter: >=2.0.0
Developing & Contributing #
First off, thanks for taking the time to contribute!
Be sure to check out detailed instructions on how to contribute to this project here and go through our Code of Conduct.
GPG Signing: As a contributor, you need to sign your commits. For more details check here
License #
This library is distributed under the MIT license found in the LICENSE file. | https://pub.dev/packages/domain_objects | CC-MAIN-2022-40 | refinedweb | 217 | 58.08 |
[ Ok, last overview of this thing. ]

On Tue, 17 Jul 2007, Satyam Sharma wrote:
> On Mon, 16 Jul 2007, Al Viro wrote:
> > On Tue, Jul 17, 2007 at 01:00:42AM +0530, Satyam Sharma wrote:
> > > > if ((current->fsuid != inode->i_uid) && !capable(CAP_FOWNER))
> > > >
> > > > test is a rather common test, and in fact, arguably, every time you see
> > > > one part of it, you should probably see the other. Would it make sense to
> > > > make a helper inline function to do this, and replace all users? Doing a
> > > >
> > > > git grep 'fsuid.*\<i_uid\>'
> > > >
> > > > seems to show quite a few cases of this pattern..
> > >
> > > Yes, I thought of writing a helper function for this myself. The semantics
> > > of CAP_FOWNER sort of justify that, but probably better to get Al's views
> > > on this first.
> >
> > Helper makes sense (and most of these places will become its call), but...
> > E.g. IIRC the change of UID requires CAP_CHOWN; CAP_FOWNER is not enough.
> > Ditto for change of GID.

Al, you were *spot* on here. In fact CAP_FOWNER has no role to play in chown(2), I think.

I did try force-fitting CAP_FOWNER to this thing, and even produced a patch, but note that bullet one and (especially) two of DESCRIPTION in ():

"Changing the group ID is permitted to a process with an effective user ID equal to the user ID of the file, but without appropriate privileges, if and only if owner is equal to the file's user ID or (uid_t)-1 and group is equal either to the calling process' effective group ID or to one of its supplementary group IDs."

pretty much rules out any role for CAP_FOWNER.
CAP_CHOWN is clearly the "appropriate privileges" in question here, and force-fitting CAP_FOWNER to "with an effective user ID equal to the user ID of the file, but *without* appropriate privileges" sounds almost sinful.

And, the part about "group is equal either to the *calling* process' effective group ID or to one of its supplementary group IDs" is the last straw, as follows:

This led to the following behaviour when I tested with that patch (to force-fit CAP_FOWNER into this) applied: a process carrying CAP_FOWNER (say with fsuid == userA) could change the i_gid of a file (say owned by userB) to a supplementary group of userA (the calling process) such that userB (the original owner) may not even necessarily be a member of that supplementary group (the new i_gid).

This looked like undesirable behaviour to me, so I decided to discard that patch, and stick to our current behaviour which seems correct.

> > setlease() is using CAP_LEASE and that appears
> > to be intentional (no idea what relevant standards say here)...

I agree here as well, in principle. There is surprisingly little open documentation available about CAP_LEASE, so someone might need to check that up, but I would classify this similar to the CAP_CHOWN case and continue to keep CAP_FOWNER out of it, as it presently is.

> > I'd suggest converting the obvious cases with new helper and taking the
> > rest one-by-one after that. Some of those might want CAP_FOWNER added,
> > some not...
>
> There aren't too many negative results, here's a little audit:
>
> fs/attr.c:32:
> fs/attr.c:38:
>
> -> Both are from inode_change_ok().
[ for chown(2) and chgrp(2) ]

> -> CAP_FOWNER is not checked for either case, I think it should be.
> -> CAP_CHOWN is anyway checked for explicitly later in that condition.

So this one was not converted.

> fs/namei.c:186: if (current->fsuid == inode->i_uid)
>
> -> generic_permission().
> -> I wonder if CAP_FOWNER processes should ever even be calling into
> this function in the first place (?)
> -> So best to keep CAP_FOWNER out of this condition (?)
>
> fs/namei.c:438: if (current->fsuid == inode->i_uid)
>
> -> exec_permission_lite().
> -> This is a clone function of the previous one, so again CAP_FOWNER
> out of this (?)

Neither these. Seeing from the capabilities(7) man page: [...]

I think generic_permission() and exec_permission_lite() are precisely the "operations covered by CAP_DAC_OVERRIDE and CAP_DAC_READ_SEARCH" being mentioned here. So no modification here either.
I could've avoided the breakage by #include'ingsched.h from fs.h, but decided against it. (Probably explains whyget_fs_excl() and friends are also macros and not inline functions inthat file.)Also, named it "is_owner_or_cap(inode)". Patch follows next.Satyam-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2007/7/17/122 | CC-MAIN-2019-18 | refinedweb | 891 | 56.76 |
Making Nested Components Visible
I used to have my components and pages in the same directory and decided that it would make more sense to separate them out. So if I have homepage.qml in the pages directory that wants to use DropDownMenu.qml in the components directory, how can I do this?
- DenisKormalev
Something like this will help you I think
@
import "../components"
@
I will look into this. Thanks.
So I got that working but now the design module in qt creator cannot see my images. How can I fix this? | https://forum.qt.io/topic/3249/making-nested-components-visible/2 | CC-MAIN-2022-05 | refinedweb | 103 | 75.61 |
See also: SyntaxExtensionMechanism
[SimonWillison] This example is posted to show an alternative way of dealing with entry authors and contributors, where their personal details are provided once in a contributors block at the start of the feed, then <author ref="bob" /> style elements are used to indicate authorship of entries by a pre-designated author. This eliminates duplicated/redundant data in the entry feeds, reducing the size of the feed and the number of elements that an application must parse in order to understand the feed.
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="" xmlns:>
  <title>My First Weblog</title>
  <link></link>
  <modified>2003-02-05T12:29:29Z</modified>
  <contributors>
    <person id="bob">
      <name>Bob B. Bobbington</name>
      <homepage></homepage>
      <weblog></weblog>
      <email>bob@bobbington.org</email>
    </person>
    <person id="yoyo">
      <name>Yo-Yo Dyne</name>
      <homepage></homepage>
      <weblog></weblog>
      <email>yoyo@bobbington.org</email>
    </person>
  </contributors>
  <entry>
    <title>My First Entry</title>
    <summary>A very boring entry; just learning how to blog here...</summary>
    <author ref="bob" />
    <contributor ref="yoyo" />
    <!-- and another couple contributors could go here -->
    <link></link>
    <id></id>
    <created>2003-02-05T12:29:29Z</created>
    <issued>2003-02-05T08:29:29-04:00</issued>
    <modified>2003-02-05T12:29:29Z</modified>
    <content type="application/xhtml+xml" xml:>
      <p xmlns="...">Hello, <em>weblog</em> world! 2 < 4!</p>
    </content>
  </entry>
</feed>
[GeorgBauer] +1, I like this approach, as it reduces volume. Maybe two forms of reference, one local and one external to FOAF representations?
[SimonWillison] An external reference to a FOAF representation is an interesting idea, but I'm worried that it would raise the barrier of entry for Atom applications - now instead of just understanding Atom they would have to be able to retrieve other files over HTTP and understand FOAF as well. A FOAF link in the person section would be a good idea though, maybe even as a namespaced extension:
...
<contributors>
  <person id="bob">
    <name>Bob B. Bobbington</name>
    <homepage></homepage>
    <weblog></weblog>
    <foaf:url></foaf:url>
  </person>
</contributors>
...
[DannyAyers] This looks ok, but what does it mean? Ok, so presumably here you're saying that the foaf:url applies to the person, but I don't think the interpretation has been formalised anywhere. What if you wanted to include the whole FOAF profile? I think this is one special case of which there are likely to be loads(e.g. topics, threads) - so I reckon what's needed is a rock-solid, well-defined general purpose extension mechanism.
[PeteProdoehl] +1, I too think this is a good approach. Are there any negatives to doing it this way rather than specifying author/contributor on a per item basis?
[MartinAtkins] Can we maybe give the people globally-unique IDs too, so that we can see if the John Smith in one document is the same John Smith mentioned in another? Some kind of URI seems to be the standard way of doing GUIDs elsewhere, but making sure the same person is always referenced by the same URI would be tricky. Any other ideas? (note that I'm not proposing replacing the 'id' attribute of the person element, which is local to a given feed and thus can be short, which is desirable.)
[GeorgBauer] Actually a URL with mailto: and the email address should make a useful GUID for persons. Or an http: URL to a contact form. So maybe some way of contacting the person directly, without looking through the homepage. On the other hand, the homepage itself is usable as a GUID, too. Two persons are unlikely to have the same name and homepage.
[AsbjornUlsberg, RefactorOk] I still dislike the <homepage> and <weblog> items, and in my example (that has been erased) these were replaced with <link>. <link rel="foaf" href="" /> could very well work as a reference to Bob's FOAF instance. Globally unique person or author IDs are typically email addresses. I can't see why email address can't be required...
[GaryF, RefactorOK] Agreed. These elements' names seem very restrictive and assuming in what Echo will be used for (i.e. blogs only). I suggest we allow an arbitrary number of <url> tags, where the first tag is the most important (perhaps a blog for an individual, or a homepage for a corporation).
[DamianCugley] +1 on using <link rel= href= /> everywhere.
With a short list of blogging-oriented values for the rel attribute.
[SimonWillison] I think email addresses in feeds are best avoided as they would be a God-send to spam harvesters - were they to be used as the basis of a unique ID I think the best approach would be to use a cryptographic hash of the address, like the mbox_sha1sum thing in FOAF.
[AdamRice] +1 for Simon's proposal in general. -1 on e-mail addresses as author IDs (or any kind of required element) because of spam. +1 on use of the Link tag: Keeps the number of unique tags to a minimum.
[MortenFrederiksen] An element with the same content as the mbox_sha1sum of FoaF would be great for interop - especially for trust networks. It keeps spammers out, while still establishing identity.
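For concreteness (an illustrative addition, not from the discussion above): FOAF's mbox_sha1sum is the SHA-1 hex digest of the full mailto: URI, which can be computed like this, using the address from the example feed:

```python
import hashlib

email = 'bob@bobbington.org'  # address from the example feed above
mbox_sha1sum = hashlib.sha1(('mailto:' + email).encode('ascii')).hexdigest()
print(mbox_sha1sum)  # 40 hex characters, stable for a given address
```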
[JimDabell] +1 on Simon's idea, I don't see any downside. I would also like some way of resolving an author to a document describing that person in detail (FOAF-like). Perhaps make the id a URI? Inventing a person: URN would be possible for non-URL ids, I can't think of a reason why we couldn't have id="person:hash-of-email".
[PeteProdoehl] +1 on the email address idea, though the 'hash-of-email' thingy might be a little too advanced for some users, since you need some sort of generator to create it, rather than just typing in an email address. I know, avoid spam, etc... but there's a good chance an email address is going to appear elsewhere in the file anyway, right? I'd vote for plain email address over obscured and hashed. Besides, it's not like hashing the email address is going to make the spam problem go away.
[PatrickLioi] +1 for me too. Showing the same person's <homepage> etc on every single <entry> is just begging for a fix like this.
[GregReinacker] I think email should be optional, but not required, for two reasons. First, the obvious spam problem. Second, not all feeds are from a person, and thus may not easily be able to support an email address - they would have to synthesize one, which isn't necessarily a good idea.
[GaryF] +1 on moving author details to feed-level like this. -1 for using email as a GUID. I think a GUID for users is a great idea, but this is not the way to do it. Hash of email would be fine. What is the most widespread hashing algorithm (or similar)? I would go for md5, since I've seen implementations in almost every language.
[RogerBenningfield] A reluctant -1. I sincerely like the idea, and it seems to be quite efficient. Unfortunately, it would probably require some changes to the way many tools generate feeds. For example, you can't just loop over the content twice, once for contributors and once for entries... if you do, you'll just end up with duplicated data in a new location. Tools would have to be revised to specifically allow a list of *unique* authors to be dumped independently of the entries themselves. I'd be willing to dedicate 30 minutes to changing my own code, but I'm hesitant to demand that other people do the same.
[JimDabell] Roger: A few possibilities to allay your concerns:
1. Output buffering seems to be the easiest solution to this trouble - why can't people just build up the <contributors> and <entry> elements before outputting them both?
2. Multiple <contributors> elements. In the rare cases where you cannot loop twice or buffer output, stick a <contributors> element before each <entry> element that has a previously undefined author. This will increase reading complexity a little.
3. Put the <contributors> element after all <entry> elements. May be more annoying to read.
4. Allow feeds to insert <person> elements directly within <author> elements, and leave ref="" as an optimisation.
I don't particularly like 2-4, primarily because they make things more difficult, but I really don't see a problem with 1.
[MichelValdrighi] Here's another way, that requires some logical changes in requirements of <author> elements. This assumes that an author's details are the same throughout the feed. One would only have to put the whole collection of <author> tags once, and then following entries by the same author would only need to use <author><name>John Smith</name></author>. Only problem with this approach is when two contributors bear the same name, but then again this is something that should be handled by the content producer already.
[AdamRice] Roger--You're right. But if we call this a desirable optimization for one's feed, not a requirement, that should work. Suggest harmonizing the two points of view thus: If a feed contains <author ref="Alice" /> with no further qualifications, there must be a "contributors" section. Otherwise, it is valid to have contributor-info inline and have an empty "contributors" section (but keep it there as a flag--"I haven't forgotten about this, I'm choosing to leave it empty").
[DamianCugley] How about allowing a choice between two elements, <authorRef href="#bob"/> or <author>... complete entry included inline ... </author>. Whichever the feed generating software finds most convenient to issue will depend on whether the feed is single-author or mixed. So long as the 'person' definitions preceed their use in references, consumers of feeds should not have too much trouble with either variation.
[Arien, RefactorOk] I propose having author info completely external to the feed: every author should have a URI. The analogue case for a URI for every entry is discussed in RestEchoApiOneUriForEachEntry. (Copied from EchoExample)
[EricScheid] +1. We could also use the same method to pre-define other bulky defaults like <copyright>, pulling them into individual entries as required. Especially useful when there is a mix of copyrights in the feed.
[JeremyGray] +1 to the general concept, with the following notes (in descending order of importance to me):
I would prefer to see authors and contributors always in the linked form, not in linked or nested form. Using a single form will keep things simple and consistent.
With various kinds of referencing starting to appear in the Wiki, on this page for author and contributor references and elsewhere regarding relation of trackbacks and comments to other entries (as well as other discussions surrounding 'threading'-related issues), might it be worth selecting a single reference mechanism for use across the spec, whether it be an 'HTML' style or 'XLink' style (as coined by another contributor elsewhere in the Wiki) ? Would a ReferenceMechanismDiscussion page do the trick?
FOAF should be addressed separately, as an extension
For those suggesting unique identifiers of some kind for authors, I agree they would prove quite useful. Direct exposure of email addresses does raise some concerns for some people, though with the power available in current anti-spam technology I hardly worry about it any more, personally. That said, identifiers could easily be generated using a hash i.e. as just recently mentioned (although in a FOAF-specific context, but still relevant) in
Persona Hash Key?
Given <contributors> should <person> not be <contributor> for sake of consistency? Does anyone see a present or reasonable future need for non-person contributors, and if so, how should that distinction be made (different elements vs. a single contributor element with sub-typing within) ?
CategoryModel, CategorySyntax | http://www.intertwingly.net/wiki/pie/EchoFeedWithAuthorRefs | CC-MAIN-2019-18 | refinedweb | 1,951 | 54.73 |
Overview
- The Python Style Guide will enable you to write neat and beautiful Python code
- Learn the different Python conventions and other nuances of Python programming in this Style Guide
Introduction
Have you ever come across a really poorly written piece of Python code? I’m talking about a tangled mess where you had to spend hours just trying to understand what piece of code goes where. I know a lot of you will nod your head at this.
Writing code is one part of a data scientist’s or analyst’s role. Writing beautiful and neat Python code, on the other hand, is a different ball game altogether. This could well make or break your image as a proficient programmer in the analytics or data science space (or even in software development).
Remember – our Python code is written once, but read a billion times over, potentially by viewers who are not accustomed to our style of programming. This takes on even more importance in data science. So how do we write this so-called beautiful Python code?
Welcome to the Python Style Guide!
A lot of folks in the data science and analytics domains come from a non-programming background. We start off by learning the basics of programming, move on to comprehend the theory behind machine learning, and then get cracking on the dataset. In this process, we often do not practice hardcore programming or pay attention to programming conventions.
That’s the gap this Python Style Guide will aim to address. We will go over the programming conventions for Python described by the PEP-8 document and you’ll emerge as a better programmer on the other side!
Are you completely new to Python programming? Then I’d suggest first taking the free Python course before understanding this style guide.
Python Style Guide Contents
- Why this Python Style Guide is Important for Data Science?
- What is PEP8?
- Understanding Python Naming Conventions
- Python Style Guide’s Code Layout
- Getting Familiar with using Comments
- Whitespaces in your Python Code
- General Programming Recommendations for Python
- Autoformatting your Python code
Why this Python Style Guide is Important for Data Science?
There are a couple of reasons that make formatting such an important aspect of programming, especially for data science projects:
- Readability
Good code formatting will inevitably improve the readability of your code. This will not only present your code as more organized but will also make it easier for the reader to easily understand what is going on in the program. This will especially be helpful if your program runs into thousands of lines. With so many dataframes, lists, functions, plots, etc. you can easily lose track of even your own code if you don't follow the correct formatting guidelines!
- Collaboration
If you are collaborating on a team project, which most data scientists will be, good formatting becomes an essential task. This makes sure that the code is understood correctly without any hassle. Also, following a common formatting pattern maintains consistency in the program throughout the project lifecycle.
- Bug fixes
Having a well-formatted code will also help you when you need to fix bugs in your program. Wrong indentation, improper naming, etc. can easily make debugging a nightmare! Therefore, it is always better to start off your program on the right note!
With that in mind, let’s have a quick overview of the PEP-8 style guide we will cover in this article!
What is PEP-8?
PEP-8, or Python Enhancement Proposal, is the style guide for Python programming. It was written by Guido van Rossum, Barry Warsaw, and Nick Coghlan. It describes the rules for writing a beautiful and readable Python code.
Following the PEP-8 style of coding will make sure there is consistency in your Python code, making it easier for other readers, contributors, or yourself, to comprehend it.
This article covers the most important aspects of the PEP-8 guidelines, like how to name Python objects, how to structure your code, when to include comments and whitespaces, and finally, some general programming recommendations that are important but easily overlooked by most Python programmers.
Let’s learn to write better code!
The official PEP-8 documentation can be found here.
Understanding Python Naming Convention
Shakespeare famously said – “What’s in a name?”. If he had encountered a programmer back then, he would have had a swift reply – “A lot!”.
Yes, when you write a piece of code, the name you choose for the variables, functions, and so on, has a great impact on the comprehensibility of the code. Just have a look at the following piece of code:
# Function 1 def func(x): a = x.split()[0] b = x.split()[1] return a, b print(func('Analytics Vidhya')) # Function 2 def name_split(full_name): first_name = full_name.split()[0] last_name = full_name.split()[1] return first_name, last_name print(name_split('Analytics Vidhya'))
# Outputs ('Analytics', 'Vidhya') ('Analytics', 'Vidhya')
Both the functions do the same job, but the latter one gives a better intuition as to what it is happening under the hood, even without any comments! That is why choosing the right names and following the right naming convention can make a huge difference while writing your program. That being said, let’s look at how you should name your objects in Python!
Try the above code in the live coding window below.
General Tips to Begin With
These tips can be applied to name any entity and should be followed religiously.
- Try to follow the same pattern – consistency is the key!
thisVariable, ThatVariable, some_other_variable, BIG_NO
- Avoid using long names while not being too frugal with the name either
this_could_be_a_bad_name = “Avoid this!” t = “This isn\’t good either”
- Use sensible and descriptive names. This will help later on when you try to remember the purpose of the code
X = “My Name” # Avoid this full_name = “My Name” # This is much better
- Avoid using names that begin with numbers
1_name = “This is bad!”
- Avoid using special characters like @, !, #, $, etc. in names
phone_ # Bad name
Naming Variables
- Variable names should always be in lowercase
blog = "Analytics Vidhya"
- For longer variable names, include underscores to separate_words. This improves readability
awesome_blog = "Analytics Vidhya"
- Try not to use single-character variable names like ‘I’ (uppercase i letter), ‘O’ (uppercase o letter), ‘l’ (lowercase L letter). They can be indistinguishable from numerical 1 and 0. Have a look:
O = 0 + l + I + 1
- Follow the same naming convention for Global variables
Naming Functions
- Follow the lowercase with underscores naming convention
- Use expressive names
# Avoid def con(): ... # This is better. def connect(): ...
- If a function argument name clashes with a keyword, use a trailing underscore instead of using an abbreviation. For example, turning break into break_ instead of brk
# Avoiding name clashes. def break_time(break_): print(“Your break time is”, break_,”long”)
Class names
- Follow CapWord (or camelCase or StudlyCaps) naming convention. Just start each word with a capital letter and do not include underscores between words
# Follow CapWord convention class MySampleClass: pass
- If a class contains a subclass with the same attribute name, consider adding double underscores to the class attribute
This will make sure the attribute __age in class Person is accessed as _Person__age. This is Python’s name mangling and it makes sure there is no name collision
class Person: def __init__(self): self.__age = 18 obj = Person() obj.__age # Error obj._Person__age # Correct
class CustomError(Exception): “””Custom exception class“””
Naming Class Methods
- The first argument of an instance method (the basic class method with no strings attached) should always be self. This points to the calling object
- The first argument of a class method should always be cls. This points to the class, not the object instance
class SampleClass: def instance_method(self, del_): print(“Instance method”) @classmethod def class_method(cls): print(“Class method”)
Package and Module names
- Try to keep the name short and simple
- The lowercase naming convention should be followed
- Prefer underscores for long module names
- Avoid underscores for package names
testpackage # package name sample_module.py # module name
Constant names
- Constants are usually declared and assigned values within a module
- The naming convention for constants is an aberration. Constant names should be all CAPITAL letters
- Use underscores for longer names
# Following constant variables in global.py module PI = 3.14 GRAVITY = 9.8 SPEED_OF_Light = 3*10**8
Python Style Guide’s Code Layout
Now that you know how to name entities in Python, the next question that should pop up in your mind is how to structure your code in Python! Honestly, this is very important, because without proper structure, your code could go haywire and can be the biggest turn off for any reviewer.
So, without further ado, let’s get to the basics of code layout in Python!
Indentation
It is the single most important aspect of code layout and plays a vital role in Python. Indentation tells which lines of code are to be included in the block for execution. Missing an indentation could turn out to be a critical mistake.
Indentations determine which code block a code statement belongs to. Imagine trying to write up a nested for-loop code. Writing a single line of code outside its respective loop may not give you a syntax error, but you will definitely end up with a logical error that can be potentially time-consuming in terms of debugging.
Follow the below mentioned key points on indentation for a consistent structure for your Python scripts.
- Always follow the 4-space indentation rule
# Example if value<0: print(“negative value”) # Another example for i in range(5): print(“Follow this rule religiously!”)
- Prefer to use spaces over tabs
It is recommended to use Spaces over Tabs. But Tabs can be used when the code is already indented with tabs.
if True: print('4 spaces of indentation used!')
- Break large expressions into several lines
There are several ways of handling such a situation. One way is to align the succeeding statements with the opening delimiter.
# Aligning with opening delimiter. def name_split(first_name, middle_name, last_name) # Another example. ans = solution(value_one, value_two, value_three, value_four)
A second way is to make use of the 4-space indentation rule. This will require an extra level of indentation to distinguish the arguments from the rest of the code inside the block.
# Making use of extra indentation. def name_split( first_name, middle_name, last_name): print(first_name, middle_name, last_name)
Finally, you can even make use of “hanging indents”. Hanging indentation, in the context of Python, refers to the text style where the line containing a parenthesis ends with an opening parenthesis. The subsequent lines are indented until the closing parenthesis.
# Hanging indentation. ans = solution( value_one, value_two, value_three, value_four)
- Indenting if-statements can be an issue
if-statements with multiple conditions naturally contain 4 spaces – if, space, and the opening parenthesis. As you can see, this can be an issue. Subsequent lines will also be indented and there is no way of differentiating the if-statement from the block of code it executes. Now, what do we do?
Well, we have a couple of ways to get our way around it:
# This is a problem. if (condition_one and condition_two): print(“Implement this”)
One way is to use an extra level of indentation of course!
# Use extra indentation. if (condition_one and condition_two): print(“Implement this”)
Another way is to add a comment between the if-statement conditions and the code block to distinguish between the two:
# Add a comment. if (condition_one and condition_two): # this condition is valid print(“Implement this”)
- Brackets need to be closed
Let’s say you have a long dictionary of values. You put all the key-value pairs in separate lines but where do you put the closing bracket? Does it come in the last line? The line following it? And if so, do you just put it at the beginning or after indentation?
There are a couple of ways around this problem as well.
One way is to align the closing bracket with the first non-whitespace character of the previous line.
# learning_path = { ‘Step 1’ : ’Learn programming’, ‘Step 2’ : ‘Learn machine learning’, ‘Step 3’ : ‘Crack on the hackathons’ }
The second way is to just put it as the first character of the new line.
learning_path = { ‘Step 1’ : ’Learn programming’, ‘Step 2’ : ‘Learn machine learning’, ‘Step 3’ : ‘Crack on the hackathons’ }
- Break line before binary operators
If you are trying to fit too many operators and operands into a single line, it is bound to get cumbersome. Instead, break it into several lines for better readability.
Now the obvious question – break before or after operators? The convention is to break before operators. This helps to easily make out the operator and the operand it is acting upon.
# Break lines before operator. gdp = (consumption + government_spending + investment + net_exports )
Using Blank Lines
Bunching up lines of code will only make it harder for the reader to comprehend your code. One nice way to make your code look neater and pleasing to the eyes is to introduce a relevant amount of blank lines in your code.
- Top-level functions and classes should be separated by two blank lines
# Separating classes and top level functions. class SampleClass(): pass def sample_function(): print("Top level function")
- Methods inside a class should be separated by a single blank line
# Separating methods within class. class MyClass(): def method_one(self): print("First method") def method_two(self): print("Second method")
- Try not to include blank lines between pieces of code that have related logic and function
def remove_stopwords(text): stop_words = stopwords.words("english") tokens = word_tokenize(text) clean_text = [word for word in tokens if word not in stop_words] return clean_text
- Blank lines can be used sparingly within functions to separate logical sections. This makes it easier to comprehend the code
def remove_stopwords(text): stop_words = stopwords.words("english") tokens = word_tokenize(text) clean_text = [word for word in tokens if word not in stop_words] clean_text = ' '.join(clean_text) clean_text = clean_text.lower() return clean_text
Maximum line length
- No more than 79 characters in a line
When you are writing code in Python, you cannot squeeze more than 79 characters into a single line. That’s the limit and should be the guiding rule to keep the statement short.
- You can break the statement into multiple lines and turn them into shorter lines of code
# Breaking into multiple lines. num_list = [y for y in range(100) if y % 2 == 0 if y % 5 == 0] print(num_list)
Imports
Part of the reason why a lot of data scientists love to work with Python is because of the plethora of libraries that make working with data a lot easier. Therefore, it is given that you will end up importing a bunch of libraries and modules to accomplish any task in data science.
- Should always come at the top of the Python script
- Separate libraries should be imported on separate lines
import numpy as np import pandas as pd df = pd.read_csv(r'/sample.csv')
- Imports should be grouped in the following order:
- Standard library imports
- Related third party imports
- Local application/library specific imports
- Include a blank line after each group of imports
import numpy as np import pandas as pd import matplotlib from glob import glob import spaCy import mypackage
- Can import multiple classes from the same module in a single line
from math import ceil, floor
Getting Familiar with Proper Python Comments
Understanding an uncommented piece of code can be a strenuous activity. Even for the original writer of the code, it can be difficult to remember what exactly is happening in a code line after a period of time.
Therefore, it is best to comment on your code then and there so that the reader can have a proper understanding of what you tried to achieve with that particular piece of code.
General Tips for Including Comments
- Always begin the comment with a capital letter
- Update the comment as and when you update your code
- Avoid comments that state the obvious
Block Comments
- Describe the piece of code that follows them
- Have the same indentation as the piece of code
- Start with a # and a single space
# Remove non-alphanumeric characters from user input string. import re raw_text = input(‘Enter string:‘) text = re.sub(r'\W+', ' ', raw_text)
Inline comments
- These are comments on the same line as the code statement
- Should be separated by at least two spaces from the code statement
- Starts with the usual # followed by a whitespace
- Do not use them to state the obvious
- Try to use them sparingly as they can be distracting
info_dict = {} # Dictionary to store the extracted information
Documentation String
- Used to describe public modules, classes, functions, and methods
- Also known as “docstrings”
- What makes them stand out from the rest of the comments is that they are enclosed within triple quotes ”””
- If docstring ends in a single line, include the closing “”” on the same line
- If docstring runs into multiple lines, put the closing “”” on a new line
def square_num(x): """Returns the square of a number.""" return x**2 def power(x, y): """Multiline comments. Returns x raised to y. """ return x**y
Whitespaces in your Python Code
Whitespaces are often ignored as a trivial aspect when writing beautiful code. But using whitespaces correctly can increase the readability of the code by leaps and bounds. They help prevent the code statement and expressions from getting too crowded. This inevitably helps the readers to go over the code with ease.
Key points
- Avoid putting whitespaces immediately within brackets
# Correct way df[‘clean_text’] = df[‘text’].apply(preprocess)
- Never put whitespace before a comma, semicolon, or colon
# Correct name_split = lambda x: x.split() # Correct
- Don’t include whitespaces between a character and an opening bracket
# Correct print(‘This is the right way’) # Correct for i in range(5): name_dict[i] = input_list[i]
- When using multiple operators, include whitespaces only around the lowest priority operator
# Correct ans = x**2 + b*x + c
- In slices, colons act as binary operators
They should be treated as the lowest priority operators. Equal spaces must be included around each colon
# Correct df_valid = df_train[lower_bound+5 : upper_bound-5]
- Trailing whitespaces should be avoided
- Don’t surround = sign with whitespaces when indicating function parameters
def exp(base, power=2): return base**power
- Always surround the following binary operators with single whitespace on either side:
- Assignment operators (=, +=, -=, etc.)
- Comparisons (==, <, >, !=, <>, <=, >=, in, not in, is, is not)
- Booleans (and, or, not)
# Correct brooklyn = [‘Amy’, ‘Terry’, ‘Gina’, 'Jake'] count = 0 for name in brooklyn: if name == ‘Jake’: print(‘Cool’) count += 1
General Programming Recommendations for Python
Often, there are a number of ways to write a piece of code. And while they achieve the same task, it is better to use the recommended way of writing cleaner code and maintain consistency. I’ve covered some of these in this section.
- For comparison to singletons like None, always use is or is not. Do not use equality operators
# Wrong if name != None: print("Not null") # Correct if name is not None: print("Not null")
- Don’t compare boolean values to TRUE or FALSE using the comparison operator. While it might be intuitive to use the comparison operator, it is unnecessary to use it. Simply write the boolean expression
# Correct if valid: print("Correct") # Wrong if valid == True: print("Wrong")
- Instead of binding a lambda function to an identifier, use a generic function. Because assigning lambda function to an identifier defeats its purpose. And it will be easier for tracebacks
# Prefer this def func(x): return None # Over this func = lambda x: x**2
- When you are catching exceptions, name the exception you are catching. Do not just use a bare except. This will make sure that the exception block does not disguise other exceptions by KeyboardInterrupt exception when you are trying to interrupt the execution
try: x = 1/0 except ZeroDivisionError: print('Cannot divide by zero')
- Be consistent with your return statements. That is to say, either all return statements in a function should return an expression or none of them should. Also, if a return statement is not returning any value, return None instead of nothing at all
# Wrong def sample(x): if x > 0: return x+1 elif x == 0: return else: return x-1 # Correct def sample(x): if x > 0: return x+1 elif x == 0: return None else: return x-1
- If you are trying to check prefixes or suffixes in a string, use ”.startswith() and ”.endswith() instead of string slicing. These are much cleaner and less prone to errors
# Correct if name.endswith('and'): print('Great!')
Autoformatting your Python code
Formatting won’t be a problem when you are working with small programs. But just imagine having to follow the correct formatting rules for a complex program running into thousands of lines! This will definitely be a difficult task to achieve. Also, most of the time, you won’t even remember all of the formatting rules. So, how do we fix this problem? Well, we could use some autoformatters to do the job for us!
Autoformatter is a program that identifies formatting errors and fixes them in place. Black is one such autoformatter that takes the burden off your shoulders by automatically formatting your Python code to one that conforms to the PEP8 style of coding.
You can easily install it using pip by typing the following command in the terminal:
pip install black
But let’s see how helpful black actually is in the real world scenario. Let’s use it to formats the following poorly typed program:
Now, all we have to do is, head over to the terminal and type the following command:
black style_script.py
Once you have done that, if there are any formatting changes to be made, black would have already done that in place and you will get the following message:
These changes will be reflected in your program once you try to open it again:
As you can see, it has correctly formatted the code and will definitely be helpful in case you miss out on any of the formatting rules.
Black can also be integrated with editors like Atom, Sublime Text, Visual Studio Code, and even Jupyter notebooks! This will surely be one extension you can never miss to add-on to your editors.
Besides black, there are other autoformatters like autopep8 and yapf which you can try out as well!
End Notes
We have covered quite a lot of key points under the Python Style Guide. If you follow these consistently throughout your code, you are bound to end up with a cleaner and readable code.
Also, following a common standard is beneficial when you are working as a team on a project. It makes it easier for other collaborators to understand. Go ahead and start incorporating these style tips in your Python code!You can also read this article on our Mobile APP
5 Comments
Very Good !
Hi! Thanks for the article.
I have found two minor typos:
In the Brooklyn example, you seem to have missed the single quotation marks for ‘Jake’.
When referring to your article about lambda functions, you only used “amda function” for the hyperlink.
Thanks for pointing out the suggested formatting erros!
I am completely new in python programming. Can you please suggest from where I can learn python basics
Hi
You can check out our free course on Python for Data Science –. It goes through the basics of Python programming language in the beginning which should solve your purpose.
I hope this helps. | https://www.analyticsvidhya.com/blog/2020/07/python-style-guide/ | CC-MAIN-2020-34 | refinedweb | 3,915 | 60.65 |
Over the coming weeks I'm embarking on a journey through the wonders and mysteries of Windows Vista's Network Access Protection (NAP).
All of the content for this journey will be tagged as "JourneyThrough: Network Access Protection".
Through a series of blog posts I'll share with you details of what's possible with Windows XP and Server 2003 to reduce the risk of machine that fail to comply with corporate security policy through Network Quarantine (a feature of Windows Server 2003) and IPsec.
I'll share with you the background and context for this important new technology. I'm writing the content myself and linking to interesting documents as well providing my own commentary and suggestions.
This material is grounded in reality rather than marketing spin; it’s a technical guide which will help you learn about how to secure network access by asserting and enforcing the security policy compliance (health) of client machines BEFORE granting them access to sensitive “internal” networks.
Modern information workers typically take advantage of seamless access to whatever internet access is available to them. Think about your daily use of network resources. I use a 3Mb/sec DSL (Digital Subscriber Line) connection at home, the corporate wireless and wired connections when I’m in the office and cyber café wireless access when I’m out and about. Sometimes I also use hotel and customer/partner network access too. No longer is it safe to assume that “the network protects me” from all ills. In fact it’s often “the network” that carries the malicious software from other peoples’ poorly configured / poorly patched systems. Consider what happens when you return from holiday. Often a security update (formerly referred to as a patch) is released while you’re away. When you return (either remotely or in the office) your system is susceptible to exploitation via the vulnerability until you update it. Both quarantine (for VPN connections) and NAP (for all connections) will reject connection requests (if so configured to do so) for un-patched systems thereby saving other members of the network from infection. These technologies work in conjunction with personal firewalls (such as the one built into XP SP2) which reject unsolicited incoming connection requests from other hosts. Theoretically this will prevent worms infecting such un-patched systems. Defense in depth best practice dictates that both quarantine/NAP AND personal firewalls should BOTH be used to provide effective security.
The quarantine feature in Windows Server 2003 is a “no additional charge” (free) feature of the operating system that enables us to force VPN (Routing and Remote Access – RRAS) clients to prove that they comply with the prime aspects of our information security policy BEFORE granting them access to the internal network. If you’ve ever worked from home and established a full network connection (Virtual Private Network – VPN) to corpnet then you’ve used our quarantine implementation. The Connection Manager Administration Kit (CMAK) is used in conjunction with Remote Quarantine Client (RQC) and Remote Quarantine Server (RQS) to implement quarantine. Microsoft employees (who work remotely) run “Connection Manager” to initiate the VPN client and integrated Quarantine functionality. Remediation is a unique benefit of both Quarantine and NAP which enables users to bring their systems into policy compliance if they are initially denied access.
NAP is essentially a next generation of quarantine bringing in support for IPsec, DHCP, 802.1X (port based authentication), RRAS (VPN) enforcement points. Client machines must prove that they comply with corporate policy (i.e. are “healthy”) BEFORE connecting to corporate resources by wired, wireless and remote access.
Stay tuned and please provide your feedback in the customary "comment" manner.
Just a brief comment about the lmauriayp feature (free).
I've worked a bit with the RQC client, when I did setup an ISA server 2004 wtih VPN and used the quarantine network. I can see the idee and the use for it, but as an administrator, and not a programmer I had some serius problems actually implementing the features I would like to check for.
I know it can't all be "click" and use, but still I hope that the Quarantine access implementing in the feature will be made a bit easy'er.
It's difficult for an administrator to make a small user interface, so the user know if they get access or what they need to comply with the company security policy.
So perhabs it could come with a default script using windows XP SP2 security center which can check for the antivirus is uptodate parameter.
I personal check for Antivirus (uptodate), Firewall enabled, ICS disabled and then SP2.
Looking forward to the rest of the journey and thanks alot for spending some time on these topisc :-)
I agree Benjamin, RQS/RQC wasn't written with the IT guy in mind. We thought more of the IT-Dev, who needed a solution *today*. There was a ton of pressure to supply it, even from our own internal IT group.
We hope to solve this in Longhorn Server with NAP, and give complete policy based control to the admin (through NPS, formally IAS/RADIUS).
I am curious about the experience Steve is going to have setting it all up. I will come back here to check it out. :->
I am going to be posting Beta 2 screen shots this week from my NAP demo rig. I also really want to web cast a live demo of NAP in action. This stuff is real!
Jeff Sigman [MSFT]
NAP Release Manager
jeff.sigman@online.microsoft.com
"JourneyThrough" is a term I made up last week to signify a way of linking a series of blog entries...
Hi Jeff, Thanks for the answer :-) Actually I think I forgot to mention in my replay, that I was looking forward to NAP since (from the NAP teams blog) it looks like it will have some user friendly graphic. Only question is if it will be integrated with the ISA VPN (I guess so) *Grins* But I will follow this blog and the NAP teams blog closely, since this seems like a great way to handle security on the LAN in a company and something that we for sure can use. Keep up the good work :-) Yours Sincerely, Benjamin | http://blogs.technet.com/b/steve_lamb/archive/2006/05/05/427378.aspx | CC-MAIN-2014-41 | refinedweb | 1,045 | 50.87 |
Programming Python, Part I
A blog is not a blog if we can't add new posts, so let's do that:
>>> blog = blog + ["A new post."] >>> blog ['My first post', 'Python is cool', 'A new post.']
Here we set blog to a new value, which is the old blog, and a new post. Remembering all that merely to add a new post is not pleasant though, so we can encapsulate it in what is called a function:
>>> def add_post(blog, new_post): ... return blog + [new_post] ... >>>
def is the keyword used to define a new function or method (more on functions in structured or functional programming and methods in object-oriented programming later in this article). What follows is the name of the function. Inside the parentheses, we have the formal parameters. Those are like variables that will be defined by the caller of the function. After the colon, the prompt has changed from >>> to ... to show that we are inside a definition. The function is composed of all those lines with a level of indentation below the level of the def line.
So, where other programming languages use curly braces or begin/end keywords, Python uses indentation. The idea is that if you are a good programmer, you'd indent it anyway, so we'll use that indentation and make you a good programmer at the same time. Indeed, it's a controversial issue; I didn't like it at first, but I learned to live with it.
While working with the REPL, you safely can press Tab to make an indentation level, and although a Tab character can do it, using four spaces is the strongly recommended way. Many text editors know to put four spaces when you press Tab when editing a Python file. Whatever you do, never, I repeat, never, mix Tabs with spaces. In other programming languages, it may make the community dislike you, but in Python, it'll make your program fail with weird error messages.
Being practical, to reproduce what I did, simply type the class header, def add_post(blog, new_post):, press Enter, press Tab, type return blog + [new_post], press Enter, press Enter again, and that's it. Let's see the function in action:
>>> blog = add_post(blog, "Fourth post") >>> blog ['My first post', 'Python is cool', 'A new post.', 'Fourth post'] >>>
add_post takes two parameters. The first is the blog itself, and it gets assigned to blog. This is tricky. The blog inside the function is not the same as the blog outside the function. They are in different scopes. That's why the following:
>>> def add_post(blog, new_post): ... blog = blog + [new_post]
doesn't work. blog is modified only inside the function. By now, you might know that new_post contains the post passed to the function.
Our blog is growing, and it is time to see that the posts are simply strings, but we want to have a title and a body. One way to do this is to use tuples, like this:
>>> blog = [] >>> blog = add_post(blog, ("New blog", "First post")) >>> blog = add_post(blog, ("Cool", "Python is cool")) >>> blog [('New blog', 'First post'), ('Cool', 'Python and is cool')] >>>
In the first line, I reset the blog to be an empty list. Then, I added two posts. See the double parentheses? The outside parentheses are part of the function call, and the inside parentheses are the creation of a tuple.
A tuple is created by parentheses, and its members are separated by commas. They are similar to lists, but semantically, they are different. For example, you can't update the members of a tuple. Tuples are used to build some kind of structure with a fixed set of elements. Let's see a tuple outside of our blog:
>>> (1,2,3) (1, 2, 3)
Accessing each part of the posts is similar to accessing each part of the blog:
>>> blog[0][0] 'New blog' >>> blog[0][1] 'This is my first post'
This might be a good solution if we want to store only a title and a body. But, how long until we want to add the date and time, excerpts, tags or messages? You may begin thinking you'll need to hang a sheet of paper on the wall, as shown in Figure 1, to remember the index of each field—not pleasant at all. To solve this problem, and some others, Python gives us object-oriented) | http://www.linuxjournal.com/article/9277?page=0,1&quicktabs_1=0 | CC-MAIN-2015-22 | refinedweb | 735 | 71.75 |
Week 01 Tutorial Questions
Class introduction (for everyone, starting with the tutor):
- Please turn on your camera for the introduction, and for all your COMP1521 tut-labs,
if you can and you are comfortable with this.
We understand everyone can be having a difficult day, week or year so
having your webcam on is optional in online COMP1521 tut-labs,
unless you have a cute pet in which case it's required, but you only need show the pet.
- What is your preferred name (what should we call you?)
- What other courses are you doing this term
- What parts of C from COMP1511/COMP1911 were the hardest to understand?
- Do you know any good resources to help students who have forgotten their C? For example:
- Consider the following C program skeleton:
int a;
char b[100];

int fun1() {
    int c, d;
    ...
}

double e;

int fun2() {
    int f;
    static int ff;
    ...
    fun1();
    ...
}

int g;

int main(void) {
    char h[10];
    int i;
    ...
    fun2()
    ...
}
Now consider what happens during the execution of this program and answer the following:
- Which variables are accessible from within main()?
- Which variables are accessible from within fun2()?
- Which variables are accessible from within fun1()?
- Which variables are removed when fun1() returns?
- Which variables are removed when fun2() returns?
- How long does the variable f exist during program execution?
- How long does the variable g exist during program execution?
- Explain the differences between the properties of the variables s1 and s2 in the following program fragment:
#include <stdio.h>

char *s1 = "abc";

int main(void) {
    char *s2 = "def";
    // ...
}
Where is each variable located in memory? Where are the strings located?
- C's sizeof operator is a prefix unary operator (it precedes its single operand) - what are examples of other C unary operators?
- Why is C's sizeof operator different to other C unary & binary operators?
- Discuss errors in this code:
struct node *a = NULL:
struct node *b = malloc(sizeof b);
struct node *c = malloc(sizeof struct node);
struct node *d = malloc(8);
c = a;
d.data = 42;
c->data = 42;
- What is a pointer? How do pointers relate to other variables?
Consider the following small C program:
#include <stdio.h>

int main(void) {
    int n[4] = { 42, 23, 11, 7 };
    int *p;

    p = &n[0];
    printf("%p\n", p);              // prints 0x7fff00000000
    printf("%lu\n", sizeof (int));  // prints 4

    // what do these statements print ?
    n[0]++;
    printf("%d\n", *p);
    p++;
    printf("%p\n", p);
    printf("%d\n", *p);

    return 0;
}
Assume the variable n has address 0x7fff00000000.
Assume sizeof (int) == 4.
What does the program print?
Consider the following pair of variables:

int x;   // a variable located at address 1000 with initial value 0
int *p;  // a variable located at address 2000 with initial value 0
If each of the following statements is executed in turn, starting from the above state, show the value of both variables after each statement:
p = &x;
x = 5;
*p = 3;
x = (int)p;
x = (int)&p;
p = NULL;
*p = 1;
If any of the statements would trigger an error, state what the error would be.
Consider the following C program:
#include <stdio.h>

int main(void) {
    int nums[] = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3};
    for (int i = 0; i < 10; i++) {
        printf("%d\n", nums[i]);
    }
    return 0;
}
This program uses a for loop to print each element in the array.
Rewrite this program using a recursive function.
- What is a struct? What are the differences between structs and arrays?
Define a struct that might store information about a pet.
The information should include the pet's name, type of animal, age and weight.
Create a variable of this type and assign information to it to represent an axolotl named "Fluffy" of age 7 that weighs 300 grams.
- Write a function that increases the age of fluffy by one and then increases its weight by the fraction of its age that has increased. The function is defined like this:
void age_fluffy(struct pet *my_pet);
e.g.: If fluffy goes from age 7 to age 8, it should end up weighing 8/7 times the amount it weighed before. You can store the weight as an int and ignore any fractions.
Show how this function can be called by passing the address of a struct variable to the function.
- Write a main function that takes command line input that fills out the fields of the pet struct. Remember that command line arguments are given to our main function as an array of strings, which means we'll need something to convert strings to numbers.
- Consider the following C program:

#include <stdio.h>

int main(void) {
    char str[10];
    str[0] = 'H';
    str[1] = 'i';
    printf("%s", str);
    return 0;
}

What will happen when the above program is compiled and executed?
- How would you correct the program?
For each of the following commands, describe what kind of output would be produced:
gcc -E x.c
gcc -S x.c
gcc -c x.c
gcc x.c
Use the following simple C code as an example:
#include <stdio.h>

#define N 10

int main(void) {
    char str[N] = { 'H', 'i', '\0' };
    printf("%s\n", str);
    return 0;
}
Consider the following (working) C code to trim whitespace from both ends of a string:
// COMP1521 21T2 GDB debugging example
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <assert.h>

void trim(char *str);
char **tokenise(char *str, char *sep);
void freeTokens(char **toks);

int main(int argc, char **argv)
{
    if (argc != 2) exit(1);

    char *string = strdup(argv[1]);
    printf("Input: \"%s\"\n", string);
    trim(string);
    printf("Trimmed: \"%s\"\n", string);

    char **tokens = tokenise(string, " ");
    for (int i = 0; tokens[i] != NULL; i++)
        printf("tok[%d] = \"%s\"\n", i, tokens[i]);
    freeTokens(tokens);

    return 0;
}

// trim: remove leading/trailing spaces from a string
void trim(char *str)
{
    int first, last;
    first = 0;
    while (isspace(str[first])) first++;
    last = strlen(str)-1;
    while (isspace(str[last])) last--;
    int i, j = 0;
    for (i = first; i <= last; i++) str[j++] = str[i];
    str[j] = '\0';
}

// tokenise: split a string around a set of separators
// create an array of separate strings
// final array element contains NULL
char **tokenise(char *str, char *sep)
{
    // temp copy of string, because strtok() mangles it
    char *tmp;

    // count tokens
    tmp = strdup(str);
    int n = 0;
    strtok(tmp, sep); n++;
    while (strtok(NULL, sep) != NULL) n++;
    free(tmp);

    // allocate array for argv strings
    char **strings = malloc((n+1)*sizeof(char *));
    assert(strings != NULL);

    // now tokenise and fill array
    tmp = strdup(str);
    char *next;
    int i = 0;
    next = strtok(tmp, sep);
    strings[i++] = strdup(next);
    while ((next = strtok(NULL,sep)) != NULL)
        strings[i++] = strdup(next);
    strings[i] = NULL;
    free(tmp);

    return strings;
}

// freeTokens: free memory associated with array of tokens
void freeTokens(char **toks)
{
    for (int i = 0; toks[i] != NULL; i++) free(toks[i]);
    free(toks);
}
You can grab a copy of this code as trim.c.
The part that you are required to write (i.e., would not be part of the supplied code) is highlighted in the code.
Change the code to make it incorrect. Run the code, to see what errors it produces, using this command:
gcc -std=gnu99 -Wall -Werror -g -o trim trim.c
./trim " a string "
Then use GDB to identify the location where the code "goes wrong".
Revision questions
The following questions are primarily intended for revision, either this week or later in session.
Your tutor may still choose to cover some of these questions, time permitting. | https://cgi.cse.unsw.edu.au/~cs1521/21T3/tut/01/questions | CC-MAIN-2022-05 | refinedweb | 1,249 | 71.85 |
Hi Thomas,
> One comment, since the code totally replaces the transform on the
> group if it had a pre-existing transform (say from a previous move)
> at the start of the 'second' move it will snap back to its initial
> position. To fix this you either need to remember the total
> dx/dy from the last move (I often use attributes in a custom namespace
> for this)
Thanks for your help. As per your suggestion, I will try to implement a custom
namespace to store the dx and dy values. If I face any problems, I will let you
know.
Thanks,
Sudhakar
--
Sent from the Batik - Users forum at Nabble.com.
---------------------------------------------------------------------
To unsubscribe, e-mail: batik-users-unsubscribe@xmlgraphics.apache.org
For additional commands, e-mail: batik-users-help@xmlgraphics.apache.org | http://mail-archives.us.apache.org/mod_mbox/xmlgraphics-batik-users/200606.mbox/%3C4953093.post@talk.nabble.com%3E | CC-MAIN-2021-04 | refinedweb | 136 | 62.58 |
Details
- Type: Bug
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 1.2
- Component/s: core/search
- Labels: None
- Environment:
  Operating System: All
  Platform: All
- Lucene Fields: Patch Available
Description
According to the website's "Query Syntax" page, fuzzy searches are given a
boost of 0.2. I've found this not to be the case, and have seen situations where
exact matches have lower relevance scores than fuzzy matches.
Rather than getting a boost of 0.2, it appears that all variations on the term
are first found in the model, where dist* > 0.5.
- dist = levenshteinDistance / length of min(termlength, variantlength)
This then leads to a boolean OR search of all the variant terms, each of whose
boost is set to (dist - 0.5)*2 for that variant.
The upshot of all of this is that there are many cases where a fuzzy match will
get a higher relevance score than an exact match.
See this email for a test case to reproduce this anomalous behaviour.
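The rule described above can be sketched as follows (my own illustration of the reported behaviour, not actual Lucene source):

```java
// Sketch of the scoring rule described in this report (illustrative only).
// "dist" is the similarity described above: 1.0 means an exact match,
// and only variants with dist > 0.5 are kept.
public class FuzzyBoostSketch {
    static final double FUZZY_THRESHOLD = 0.5;
    static final double SCALE_FACTOR = 1.0 / (1.0 - FUZZY_THRESHOLD); // = 2

    static double boost(double dist) {
        return (dist - FUZZY_THRESHOLD) * SCALE_FACTOR;
    }

    public static void main(String[] args) {
        // An exact match gets 1.0 while a close variant gets 0.5, so a
        // much rarer (higher-idf) variant can still outscore the exact term.
        System.out.println(boost(1.0));   // -> 1.0
        System.out.println(boost(0.75));  // -> 0.5
    }
}
```

This is why a rare misspelling can beat a frequent correctly spelled term: the boosts are close, but the misspelling's idf is much higher.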
Here is a candidate patch to address the issue -
*** lucene-1.2\src\java\org\apache\lucene\search\FuzzyTermEnum.java  Sun Jun 09 13:47:54 2002
--- lucene-1.2-modified\src\java\org\apache\lucene\search\FuzzyTermEnum.java  Fri Mar 14 11:37:20 2003
***************
*** 99,105 ****
  }

  final protected float difference(){
!     return (float)((distance - FUZZY_THRESHOLD) * SCALE_FACTOR);
  }

  final public boolean endEnum(){
--- 99,109 ----
  }

  final protected float difference() {
!     if (distance == 1.0)
!     else
!         return (float)((distance - FUZZY_THRESHOLD) * SCALE_FACTOR);
  }

  final public boolean endEnum() {
***************
*** 111,117 ****
  ******************************/

  public static final double FUZZY_THRESHOLD = 0.5;
! public static final double SCALE_FACTOR = 1.0f / (1.0f - FUZZY_THRESHOLD);

  /**
    Finds and returns the smallest of three integers
--- 115,121 ----
  ******************************/

  public static final double FUZZY_THRESHOLD = 0.5;
! public static final double SCALE_FACTOR = 0.2f * (1.0f / (1.0f - FUZZY_THRESHOLD));

  /**
    Finds and returns the smallest of three integers
Issue Links
- duplicates
LUCENE-329 Fuzzy query scoring issues
- Closed
Activity
I will work on massaging my test case into a JUnit test.
Meanwhile, I chose the value of 0.2 simply because it is the documented
behavior, and therefore I considered that to be the expected, even desired,
behavior. That said, it does appear to be a randomly chosen value, although not
chosen by me
Following the logic of how the scoring mechanism works (or at least my
understanding of it), this is not a universal fix, but rather as I state in my
original email on lucene-dev, it mitigates the problem. I chose the fix simply
as it brought the functionality in line with documented behavior.
The essence of the problem is the battle in scoring between levenshtein distance
and term frequency - high frequency terms are scored lower than low frequency
terms. A good example of a low frequency term is a typo in a document. If the
original correctly spelled word has a very high frequency, the misspelled word
will come out on top, due to its significantly lower frequency.
By setting the boost to 0.2, we at least make it 5 times harder (in terms of
frequency) for the misspelled item to appear ahead of the correctly spelled
item. But this clearly means that it will still happen.
--Cormac
I would suggest this is a duplicate of
The idf rating of expanded terms should be the same and not favour rarer terms. I suggest that this applies to all auto-expanding searches eg range queries.
Should we drop this bug as a duplicate?
Here is a patch, with a test for the issue.
This patch adds TOP_TERMS_CONSTANT_BOOLEAN_REWRITE to complement TOP_TERMS_SCORING_BOOLEAN_REWRITE.
Note: this solution is different than LUCENE-329, but I think this rewrite method could be useful for other queries as well.
example usage:
FuzzyQuery query = new FuzzyQuery(new Term("field", "Lucene"));
query.setRewriteMethod(MultiTermQuery.TOP_TERMS_CONSTANT_BOOLEAN_REWRITE);
ScoreDoc[] hits = searcher.search(query, ...)
...
I will wait till after the code freeze and commit this in a few days if no one objects.
I don't claim it's a 'best-practice' fix for fuzzy (see
LUCENE-329 for ideas on that), I just think TOP_TERMS_CONSTANT_BOOLEAN_REWRITE is a useful complement to TOP_TERMS_SCORING_BOOLEAN_REWRITE, for MultiTermQueries that want the Top-N terms expansion, but the constant score behavior of CONSTANT_BOOLEAN_REWRITE.
This patch doesn't change any defaults for fuzzy either; in fact it's not specific to fuzzy at all.

Uwe pointed out to me, I think there is a naming problem with TOP_TERMS_CONSTANT_BOOLEAN_REWRITE, as the entire BooleanQuery will not produce the same score as CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE.

I think the behavior makes sense though, as it wouldn't make sense to use TOP_TERMS without per-term boosting, but we need to fix the naming... and TOP_TERMS_BOOST_BOOLEAN_REWRITE sounds confusing.
I will wait till after the code freeze and commit this in a few days if no one objects.
The code freeze only affects branches. Trunk is only frozen for fixes that should also go into branches.
The code freeze only affects branches. Trunk is only frozen for fixes that should also go into branches.
ok, well I will wait on this one anyway especially as there is a concern about naming... no rush, looks like its been open for a long time.
Attached is an updated patch:
- Synced to trunk as these PQ rewrite methods allow setting of the size
- Renamed to TopTermsBoostOnlyBooleanQueryRewrite
Please review, I think this rewrite method would also be helpful for
improving Fuzzy's junit tests: Testing that scores are correct, etc.
Patch looks good Robert!
Thanks Mike. I will commit later today if no one objects.
Committed revision 920499.
Cormac, the problem you described at seems clear.
I do not see any mention of 0.2f boost in any of the Fuzzy classes. This is a
documentation bug, which I will fix soon.
However, your fix may still be valid, as exact matches should never have lower
score than fuzzy ones. I would be very greatful if you could submit that
levtest class as a JUnit test, so we can see the bug clearly before applying
your patch.
Finally, why did you choose the boost of 0.2? Why not 0.1 or 0.3 for example?
And is it possible that choosing a random number such as 0.2 will work for
your test document set, but may not work for some other cases?
Thank you. | https://issues.apache.org/jira/browse/LUCENE-124 | CC-MAIN-2016-50 | refinedweb | 1,049 | 65.22 |
NAME
Querylet::Output - generic output handler for Querylet::Query
VERSION
version 0.401
SYNOPSIS
DESCRIPTION
This class provides a simple way to write output handlers for Querylet, mostly by providing an import routine that will register the handler with the type-name requested by the using script.
The methods
default_type and
handler must exist, as described below.
IMPORT
Querylet::Output provides an
import method that will register the handler when the module is imported. If an argument is given, it will be used as the type name to register. Otherwise, the result of
default_type is used.
METHODS
- default_type
This method returns the name of the type for which the output handler will be registered if no override is given.
- handler
This method returns a reference to the handler, which will be used to register the handler.
AUTHOR
Ricardo SIGNES <rjbs@cpan.org>
This software is copyright (c) 2004 by Ricardo SIGNES.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | https://metacpan.org/pod/Querylet::Output | CC-MAIN-2015-27 | refinedweb | 175 | 56.05 |
EJB 3 annotations to the max.
Until now I hadn't gotten around to using EJB3 in a real project; we already have Spring and Hibernate, so I didn't see the point. Of course the whole EJB1 and EJB2 thing is also still stuck in the back of my head. At QCon London I attended the EJB 3.0 session with Linda DeMichiel, and that looked like a good moment to see if I need to change my point of view on EJBs. As you probably know by now, the good things about EJB3 are that you can now test your EJBs without a running container and don't need to extend or implement obscure interfaces and classes anymore. But this blog is not about the features of EJB3; this blog is about the use of annotations, or should I say misuse....
The first that comes along is @PersistenceContext (well, many came along, but it's impossible to remember all of them).
For instance you can say in an EJB:
@PersistenceContext EntityManager entityManager
So it is actually the EntityManager... and not a PersistenceContext. Then why is it called @PersistenceContext? The javadoc of PersistenceContext says: "Expresses a dependency on an EntityManager persistence context." That doesn't shed much light, does it?
Okay continue to the next annotation: @EJB. Let's say you have a class AccountService that needs a reference to the CustomerBean which happens to be an EJB. The CustomerBean is declared as follows:
@Stateless
public class CustomerBean implements Customer
Fair enough, I want it to be an EJB and when using annotations this is the way to do it. So @Stateless is for an EJB, but what is @EJB used for one might think? To answer this question let's have a look in the AccountService:
public class AccountService {
    @EJB
    Customer customer;
}
Next annotation (or actually a property of an annotation) that makes my eyebrows frown is the mappedBy property of the @OneToMany. Have a look at this:
@Entity
public class Order {
    @OneToMany(mappedBy="order")
    protected Set<Item> items = new HashSet<Item>();
}
The "mappedBy = order" feels so unnatural. At first I thought: "Of course it's mapped by Order, I am in the frickin' Order class". But then, after reading the javadoc, I came to the conclusion that this actually indicates that the order property in the Item class owns this relationship. But why do I have to declare this in the Order class? Is this relevant for the Order to know? I think not, I would say that this belongs in the Item class and in particular somewhere around the order property.
I can go on with the rest of the annotations, but I have had enough. Yes EJB3 is much better then the previous versions, but who came up with the annotations? Is it really so difficult to create sensible annotations?
The following blogpost also is fun to read:
It is about annotations making your domain objects unreadable. | http://blog.xebia.com/ejb-3-annotations-to-the-max/ | CC-MAIN-2017-30 | refinedweb | 491 | 63.19 |
04 October 2007 11:15 [Source: ICIS news]
LONDON (ICIS news)--BASF is facing around a two-week delay to the restart of its major Antwerp cracker in Belgium after planned expansion and maintenance work, a company source said on Thursday.
"The original plan was to start up in early October but this has been delayed until the middle of the month," said the source.
"We have not experienced major problems but this is the first maintenance done on the cracker for around eight years so there are a number of small tasks that need to be done."
Part of the work included a 280,000 tonne/year expansion project.
After start-up, the cracker would initially run at the previous capacity, with rates being lifted over the course of the fourth quarter, for safety reasons.
"Mid-October would be alright, but anything later than that and it could become a major issue for European supply," replied a large buyer.
A second cracker operator said it was not surprised about the delay, given the major work in progress.
The project was one of a number planned for 2007, which will see European ethylene capacity lifted by around 600. | http://www.icis.com/Articles/2007/10/04/9064643/basf-faces-restart-delay-on-antwerp-cracker.html | CC-MAIN-2013-20 | refinedweb | 203 | 55.17 |
Java class Keyword
Example
Create a class named "Main":
public class Main {
  public static void main(String[] args) {
    System.out.println("Hello World");
  }
}
Definition and Usage
The
class keyword is used to create a class.
Every line of code that runs in Java must be inside a class. A class should always start with an uppercase first letter, and the name of the java file must match the class name.
A class is like an object constructor. See the example below to see how we can use it to create an object.
More Examples
Example
Create an object of
Main called "
myObj" and print the value of x:
public class Main {
  int x = 5;

  public static void main(String[] args) {
    Main myObj = new Main();
    System.out.println(myObj.x);
  }
}
Related Pages
Read more about classes and objects in our Java Classes/Objects Tutorial. | https://www.w3schools.com/java/ref_keyword_class.asp | CC-MAIN-2021-43 | refinedweb | 143 | 63.09 |
Ripple effect in Aerospike cluster: Production issue debugging
In Infoedge, we use Aerospike for many distributed caching use-cases. We have multiple clusters in production. An incident happened on one of the clusters that we are going to discuss in this post.
A dependent application was giving timeout errors, indicating a kind of slowness in Aerospike. On further debugging, we found out that it was a ripple effect of an SSD malfunctioning on a particular node. Let's deep dive into it.
In Naukri, Aerospike is a critical component that provides fast access for read-heavy traffic. We are heavily using Aerospike and its features, like multi-bin and secondary indexes, to solve software business problems at scale. The reason behind choosing Aerospike: it's a multi-threaded distributed caching solution that claims to provide 1 million TPS from a single node.

As it's a cluster solution, it provides automatic data replication, which gives both fault tolerance and scale. The number of replicas can be configured and controlled at namespace level via the replication factor. The cluster can operate as Available and Partition-tolerant (AP) or Consistent and Partition-tolerant (CP), depending on the configuration. By default, Aerospike operates as AP:
- Read transactions fetch from the master.
- Write transactions write locally, synchronously write to all replicas, and then return success.
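The synchronous write-to-all-replicas behaviour above is what later turns one slow disk into cluster-wide write latency. A toy model (my own illustration, with made-up numbers) shows the mechanism:

```python
# Toy model: a synchronous write is acknowledged only after every replica
# acknowledges, so write latency is gated by the slowest replica.
def write_latency_ms(local_ms, replica_ms_list):
    return local_ms + max(replica_ms_list, default=0)

# All nodes healthy: replica write lands on a healthy SSD.
print(write_latency_ms(2, [3]))    # -> 5

# One node has a slow SSD: every write replicating to it is dragged down.
print(write_latency_ms(2, [500]))  # -> 502
```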
Incident:
In Naukri, we are on an SOA architecture and we have more than 200 microservices. Architecture at this level mandates monitoring and observability principles. We use ELK & Solarwinds APM tools for it.

We got timeouts from one of the services, and although we have Solarwinds for APM, the slowness was not detected in it. So, we went back to old practices and put profile logging in place via Spring AOP. On analyzing it, we found this was because of slowness from Aerospike. That's also why Solarwinds was unable to detect it, as it doesn't instrument the Aerospike codebase.

So, we were getting slowness from the Aerospike cluster on all the nodes. We wanted to be sure that it was not because of application-to-cluster network slowness, and that all the slowness was on the Aerospike cluster itself. We analyzed the latencies via the Aerospike asloglatency utility.
Useful commands:
Asloglatency: Aerospike by default logs histograms around reads, writes and queries. Via asloglatency we can read the histograms into latency measurements.
Command: asloglatency -l AEROSPIKE-LOG-FILENAME -h write -N NAMESPACE -f ‘Oct 14 2021 10:10:00’ -d 180 -n 12 -e 1
With default logging, asloglatency gives high-level information about the latency of reads and writes. All 3 nodes showed higher latency on writes. To get deeper details about latency, Aerospike provides the option of micro-benchmarking.

Aerospike micro-benchmarking: On enabling it, Aerospike dumps end-to-end measurements around slices of the enabled transactions. E.g., on enabling it for writes, it gives latency for the incoming-queue wait time, de-dup resolution time, write-on-disk time, write-on-replica time (write-replication) and response return time. Via this, it's easy to drill down to the root cause of the slowness.
This can be enabled via these commands:
asinfo -v 'set-config:context=namespace;id=NAMESPACE;enable-benchmarks-write=true'

asinfo -v 'set-config:context=namespace;id=NAMESPACE;enable-benchmarks-read=true'

asinfo -v 'set-config:context=namespace;id=NAMESPACE;enable-benchmarks-storage=true'
Observations (from the asloglatency output for Node 1 and Node 2):
- The cluster is of 3 nodes, and the service was getting high write latency from all of them.
- Read latency: At 17:08:32, around 4.01% of requests took more than 512ms on Node1. On Node2, all read requests were served under 64ms.
- Write-master latency: At 17:08:42, around 4.17% of requests took more than 512ms. On Node2, all write-master transactions were done under 32ms.
- Write-replica latency (write-repl-write): At 17:08:52, around 2.74% of requests took more than 512ms. On Node2, only 0.62% of requests took more than 512ms.
- IO wait on all nodes is in the same range, and less than 1% only.
Analysis:
The SSD was responding slowly on node 1, because of which reads & writes slowed down on node 1. Writes on node 2 & node 3 slowed because of the synchronous replication writes to node 1, and that's why write-replication latency was on the higher side on these two nodes.
Conclusion:
One of the nodes' SSD malfunctioned and introduced latency on writes to the SSD, and the cluster was responding with higher latencies on writes since it synchronously writes to replicas. This led to a ripple effect on writes across the sibling nodes.
We verified it by removing the faulty node from the cluster, and all the write latency got resolved after that. Later, we replaced the faulty node with a new node.
In this article, I will be discussing the issues with traditional and well used password hashing algorithms, specifically SHA-512 and refactoring code that uses SHA (Secure Hashing Algorithm) to upgrade and use PBKDF2, a modern and much more resilient hashing algorithm.
IntermittentBug was initially developed using SHA-512 for hashing user passwords. While the algorithm is cryptographically secure, major developments with GPU hardware mean that passwords that are hashed with an algorithm such as SHA-512 can now be cracked in a matter of seconds.
GPU Cloud PaaS Services
This is defiantly a problem. I myself have implemented for many different applications a SHA-512 system for hashing passwords. SHA is very widely implemented and any compromised system where an attacker has a dump of the database with hashed passwords with or without the Salt is at a high risk of having the passwords cracked. In the example below, I have used a Salt which makes it more difficult to crack but even a SHA-512 setup with a Salt will not be safe from a determined attacker with access to high grade GPU hardware.
With the 4 major cloud providers, Amazon Web Services, Microsoft Azure, Google Cloud and IBM Cloud all offering GPU hardware for rent by the hour using a PaaS (Platform as a Service) model, having access to serious hardware has never been easier.
For about £200 per hour of PaaS GPU computation, you could crack around 500,000,000,000 candidate passwords hashed using SHA-512 every second, that’s five hundred billion passwords a second.
Thankfully to reduce this risk, you can implement the hashing algorithm PBKDF2 an acronym for (Password-Based Key Derivation Function 2) to significantly increase the time it takes to crack passwords, even using GPU based computer hardware.
PBKDF2 is designed to be slow and difficult for GPUs to run against. This works by hashing the same password thousands of times. The result is that its takes orders of magnitude longer to crack passwords, reducing the likelihood of conducting a successful brute force attack on your user’s password hash.
You will be glad to know that it’s easy to implement and even refactor your existing security methods that use less secure algorithms to instead use PBKDF2.
I will be showing you how to use PBKDF2 using C# with example unit tests and then I will be running through the refactoring process for my security code that depends on SHA-512 and upgrading to PBKDF2.
SimpleCrypto.Net
SimpleCrypto.NET uses a Nuget Package library that abstracts out the complex cryptographic code into a simple API call. I will be using this library to demonstrate hashing passwords using PBKDF2. Let’s start off with installing the package from Nuget and creating some simple unit tests.
To begin, open the Package Manager Console in Visual Studio and enter the command
Install-Package SimpleCrypto
This installs the DLL SimpleCrypto.dll
Let’s create a unit test that will create a hash of the password “Pa55word”
[TestMethod] public void TestMethod_PBKDF2HashPassword() { ICryptoService PBKDF2 = new PBKDF2(); PBKDF2.HashIterations = 100000; // start with the plain text password string PlainTextPassword = "Pa55word"; /* this is the unique Salt for the user - if you are using individual salts per user you will need to save it to yoru Database or dependant XML config */ string salt = PBKDF2.GenerateSalt(); // This is the hash of the password stored in "PlainTextPassword" string PasswordHash = PBKDF2.Compute(PlainTextPassword, salt); // password validation - compare the generated plain text Assert.IsTrue((PBKDF2.HashedText == PasswordHash) == true); }
When running this unit test, you will notice that line 18 which computes the hash takes over 1 second. This is due to the algorithm having a default of 100,000 iterations. This can be modified if you want more or less iterations than the default. You can change this value with the following code.
ICryptoService PBKDF2 = new PBKDF2(); PBKDF2.HashIterations = 100000;
Refactor existing SHA-512 code
Let’s have a look at my security class I’m using to hash passwords using SHA-512.
public class Security
{
    #region private fields
    private string _salt;
    private string _password;
    private SHA512 _shaMan;
    private byte[] _result;
    private int DataSize;
    #endregion

    #region Constructors
    public Security() { }

    public Security(string password)
    {
        // A fixed salt used for hashing.
        _salt = "EEh89//-*/$*6^f";
        _password = password;
        DataSize = _password.Length + _salt.Length;
        _shaMan = new SHA512Managed();
    }
    #endregion

    #region public methods
    /// <summary>
    /// Returns the generated hash
    /// </summary>
    /// <returns></returns>
    public string ComputeHash()
    {
        byte[] data = new byte[DataSize];
        data = Encoding.UTF8.GetBytes(_password + _salt);
        _result = _shaMan.ComputeHash(data);
        return Convert.ToBase64String(_result);
    }
    #endregion
}
This uses the built-in .NET library System.Security.Cryptography developed by Microsoft. I'm also storing a master salt value for all passwords. It's more secure to have a salt value per user, but for this example I'll refactor the code as it is.
This is my security class refactored to use PBKDF2. The abstraction provided by the ICryptoService interface makes the code simpler and easy to understand.
using SimpleCrypto;

namespace IntermittentBug.DataObjects.Data
{
    public class Security
    {
        #region private fields
        private string _salt;
        private string _password;
        private ICryptoService _PBKDF2;
        #endregion

        #region Constructors
        public Security() { }

        public Security(string password)
        {
            _PBKDF2 = new PBKDF2();
            _salt = "100000.9oiBwMFtgGHyuIu2UY76Ad39ZWL/1crawCltvwaM0ZElNjA==";
            _password = password;
        }
        #endregion

        #region public methods
        /// <summary>
        /// Returns the generated hash
        /// </summary>
        /// <returns></returns>
        public string ComputeHash()
        {
            return _PBKDF2.Compute(_password, _salt);
        }
        #endregion
    }
}
To make it more secure you could have a unique salt for each user. This is normally stored in your database along with your user data, but keep in mind that if your database has been compromised the attacker will have your salt and hash.
Also keep in mind that if you have existing user accounts that have a hashed password using a different algorithm, you will have to reset their passwords for them to log in. In this instance, it would be best to email all users telling them of the upgraded security to your application and that a password reset is necessary.
So, there we have it, a simple tutorial on how to upgrade your password hashing algorithm to protect against GPU brute force computation attacks. PBKDF2 is a recognized standard so well worth upgrading your system to improve password security. | https://www.intermittentbug.com/article/articlepage/using-pbkdf2-for-password-hashing/2298 | CC-MAIN-2019-13 | refinedweb | 1,025 | 51.89 |
Pattern Matching
Control flow—deciding which code to execute—is a big part of imperative languages. Magpie has your basic looping and branching expressions that you know from most imperative languages. But instead of a boring C-style
switch, it's got something turbo-charged: pattern matching.
A
match expression evaluates a value expression. Then it looks at each of a series of cases. For each case, it tries to match the case's pattern against the value. If the pattern matches, then it evaluates the body of the case.
val fruit = "lemon" match fruit case "apple" then print("apple pie") case "lemon" then print("lemon tart") case "peach" then print("peach cobbler") end
The expression after
match (here just
fruit) is the value being matched. Each line starting with
case until the final
end is reached defines a potential branch the control flow can take. When a match expression is evaluated, it tries each case from top to bottom. The first case here doesn't match because
"apple" isn't
"lemon", so its body is skipped. The second case does match. That means we execute its body, print
"lemon tart" and we're done. Once a case has matched, the remaining cases are skipped.
Like everything in Magpie,
match expressions are expressions, not statements, so they return a value: the result of evaluating the body of the matched case. That means we can reformulate the above example like so:
val fruit = "lemon" val dessert = match fruit case "apple" then "apple pie" case "lemon" then "lemon tart" case "peach" then "peach cobbler" end print(dessert)
Or even:
val fruit = "lemon" print(match fruit case "apple" then "apple pie" case "lemon" then "lemon tart" case "peach" then "peach cobbler" end)
A case body may also be a block, as you'd expect. If it's the last case in the match, the block must end with
end, otherwise, the following
case is enough to terminate it:
match dessert case "apple pie" then print("apple") print("pie crust") print("ice cream") case "lemon tart" then // "case" here ends "apple pie" block print("lemon") print("pastry shell") end // last case block must end with "end" end // ends entire "match" expression
Case Patterns
With simple literal patterns, this doesn't look like much more than
switch statements in other languages, but Magpie allows you to use any pattern as a case. With that, you can bind variables, destructure objects, or branch based on type:
def describe(obj) match obj case b is Bool then "Bool : " + b case n is Int then "Int : " + n case s is String then "String : " + s case x is Int, y is Int then "Point " + x + ", " + y end end describe(true) // "Bool : true" describe(123) // "Int : 123" describe(3, 4) // "Point : 3, 4"
If the pattern for a case binds a variable (like
b in the first case here) that variable's scope is limited to the body of that case. That way, you can ensure that you'll only get a variable bound if it matches what you want. For example, here we know for certain that
b will only exist if
obj is a boolean and
b will be its value.
Match Failure
It's possible for no case in a match expression to match the value. If that happens, it will throw a
NoMatchError. This is the right thing to do if you only expect certain values and a failure to match is a programmatic error. If you do want to handle any possible value, though, you can add an
else case to the match expression:
val dessert = match fruit case "apple" then "apple pie" case "lemon" then "lemon tart" case "peach" then "peach cobbler" else "unknown fruit" end
If no other pattern matches, the
else case will. | http://magpie.stuffwithstuff.com/pattern-matching.html | CC-MAIN-2017-30 | refinedweb | 631 | 71.38 |
Using Switch
Let's add a page to add a user. This should be pretty easy, right? Inside
src/fe/components/Cms/index.js, let's add a new route that will render our
UserAdd component when the URL matches
/users/new.
import UserAdd from '../UserAdd'; ... <Route path="/users" component={Users} /> <Route path="/users/new" component={UserAdd} />
We have simply added a route under the route we already had in there. Load up the application in the browser and visit
/users/new.
Well...ummm...WAT? Why is the users table still showing in the background? And why is the modal up? We can see the user add form under the modal too. It displayed them all! | https://scotch.io/courses/using-react-router-4/using-switch | CC-MAIN-2018-05 | refinedweb | 116 | 78.25 |
A Python library for Eufy Security devices
Project description
python-eufy-security
This is an experimental Python library for Eufy Security devices (cameras, doorbells, etc.).
Python Versions
The library is currently supported on
- Python 3.6
- Python 3.7
- Python 3.8
Installation
pip install python-eufy-security
Account Information
Because of the way the Eufy Security private API works, an email/password combo cannot work with both the Eufy Security mobile app and this library. It is recommended to use the mobile app to create a secondary "guest" account with a separate email address and use it with this library.
Usage
Everything starts with an:
aiohttp
ClientSession:
import asyncio from aiohttp import ClientSession async def main() -> None: """Create the aiohttp session and run the example.""" async with ClientSession() as websession: # YOUR CODE HERE asyncio.get_event_loop().run_until_complete(main())
Login and get to work:
import asyncio from aiohttp import ClientSession from eufy_security import async_login async def main() -> None: """Create the aiohttp session and run the example.""" async with ClientSession() as websession: # Create an API client: api = await async_login(EUFY_EMAIL, EUFY_PASSWORD, websession) # Loop through the cameras associated with the account: for camera in api.cameras.values(): print("------------------") print("Camera Name: %s", camera.name) print("Serial Number: %s", camera.serial) print("Station Serial Number: %s", camera.station_serial) print("Last Camera Image URL: %s", camera.last_camera_image_url) print("Starting RTSP Stream") stream_url = await camera.async_start_stream() print("Stream URL: %s", stream_url) print("Stopping RTSP Stream") stream_url = await camera.async_stop_stream() asyncio.get_event_loop().run_until_complete(main())
example.py, the tests, and the source files themselves for method
signatures and more examples.
Contributing
- Check for open features/bugs or initiate a discussion on one.
- Fork the repository.
- Install the dev environment:
make init.
- Enter the virtual environment:
source ./venv/bin/activate
- Code your new feature or bug fix.
- Write a test that covers your new functionality.
- Update
README.mdwith any new documentation.
- Run tests and ensure 100% code coverage:
make coverage
- Ensure you have no linting errors:
make lint
- Ensure you have typed your code correctly:
make typing
- Submit a pull request!
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/python-eufy-security-rik/ | CC-MAIN-2021-39 | refinedweb | 371 | 50.43 |
This phase performs the necessary rewritings to eliminate classes and methods nested in other methods. In detail: 1. It adds all free variables of local functions as additional parameters (proxies). 2. It rebinds references to free variables to the corresponding proxies, 3. It lifts all local functions and classes out as far as possible, but at least to the enclosing class. 4. It stores free variables of non-trait classes as additional fields of the class. The fields serve as proxies for methods in the class, which avoids the need of passing additional parameters to these methods.
A particularly tricky case are local traits. These cannot store free variables as field proxies, because LambdaLift runs after Mixin, so the fields cannot be expanded anymore. Instead, methods of local traits get free variables of the trait as additional proxy parameters. The difference between local classes and local traits is illustrated by the two rewritings below.
def f(x: Int) = { def f(x: Int) = new C(x).f2 class C { ==> class C(x$1: Int) { def f2 = x def f2 = x$1 } } new C().f2 }
def f(x: Int) = { def f(x: Int) = new C().f2(x) trait T { ==> trait T def f2 = x def f2(x$1: Int) = x$1 } } class C extends T class C extends T new C().f2 }
Constructors
the following two members override abstract members in Transform
the following two members override abstract members in Transform
If set, allow missing or superfluous arguments in applications and type applications.
If set, allow missing or superfluous arguments in applications and type applications. | http://dotty.epfl.ch/api/dotty/tools/dotc/transform/LambdaLift.html | CC-MAIN-2017-13 | refinedweb | 263 | 65.22 |
New answers tagged iscsi
2
VMDKs are performing very good and I haven’t seen almost any difference between VMDK and RDM. Of course, you have to pay attention to the type of provisioning since Eager Zero vs Lazy Zero VMDKs performance differs a lot If you need some virtualized storage shared or mirrored between hosts you can use an ...
0
RDM lets the storage array do things like do array level snapshots of volumes. Maybe only useful for large databases, but it is still in use. Its a question of functionality, whether you use more storage features at the virtualization level or the array level.
4
There's no much point in using RDM these days unless your VM is sort of a "controller" VM and it uses ZFS or similar to aggregate multiple individual physical disks into single unified namespace. Think about Nutanix for example. For all other cases VMDK is just as well I/O performing as RDM but RDM-x doesn't support at least some nice features and has issues ...
0
What I read and what I understand suggest to me that in either case you're likely to end up with corruption unless you're using a cluster-aware file system such as CSVFS. I'm not a Synology user, but I do work with iSCSI and Failover Clusters. I would say that in your case, allowing multiple sessions will be the most appropriate option. Adding a new target ...
Top 50 recent answers are included | http://serverfault.com/tags/iscsi/new | CC-MAIN-2016-22 | refinedweb | 251 | 61.06 |
Example: C++ Program to Reverse an Integer
#include <iostream> using namespace std; int main() { int n, reversed_number = 0, remainder; cout << "Enter an integer: "; cin >> n; while(n != 0) { remainder = n % 10; reversed_number = reversed_number * 10 + remainder; n /= 10; } cout << "Reversed Number = " << reversed_number; return 0; }
Output
Enter an integer: 2345 Reversed number = 5432
This program takes an integer input from the user and stores it in variable n.
Then the while loop is iterated until
n != 0 is false.
In each iteration, the remainder when the value of n is divided by 10 is calculated, reversed_number is computed and the value of n is decreased 10 fold.
Let us see this process in greater detail:
Finally, reversed_number (which contains the reversed number) is printed on the screen. | https://www.programiz.com/cpp-programming/examples/reverse-number | CC-MAIN-2022-40 | refinedweb | 125 | 50.46 |
By Aaron Weiss, WDVL.com
Mod_perl, the module that makes for a happy but complex marriage
between Perl and the Apache web server, can ultimately offer
significant performance improvements in Perl-backed web sites.
Perl is
a powerful and flexible as a backend language for web developers,
as the Perl You Need to
Know series no doubt illustrates. However, serving many pages
which rely on Perl processing can come at a cost in memory and
time. This month we introduce the wonders of mod_perl, an Apache module which integrates Perl
into the Apache web server. We'll begin by discussing the reasoning
behind mod_perl and its uses, pros, and cons, and in follow-up
articles delve into some code-specific issues when working in a
mod_perl environment. Readers should already be familiar with the
Perl covered in the Perl You Need to Know series -- furthermore,
you'll need hands on access to your own Apache server to employ
mod_perl.
Apache, as you may know, is a very popular web server. So
popular, in fact, that as of March 2000 Apache is believed to power
some 60% of web sites on the Internet -- and, thank goodness for
open source, it's free to boot. What an age to be alive! A web
server would be an extremely simple thing if your site only ever
attracted a single visitor at a time. With 6 billion people on this
planet, that's rather unlikely. Instead, the web server must juggle
and serve a number of suitors simultaneously, not unlike a harried
waitress scurrying between restaurant patrons. Web servers in
general employ one of several schemes for handling incoming
requests, some schemes more efficient than others. Apache, in its
current 1.x incarnation, is what they call a pre-forking
server. This does not mean Apache is older than silverware ("the
time before forks"). Rather, it means that the parent Apache
process "spawns" (like a demon) a number of children processes who
lie in wait anticipating an incoming connection. When a request
comes in, and one child is busy, another child handles that
request. If all the children are busy, Apache may birth more
children depending on the server's configuration, or -- when the
maximum number of children are born -- additional requests are
queued and the end user must wait for the next available child
process.
Each child spawned takes up space and resources -- namely,
memory and possibly processing time (depending on what it's doing).
Ideally, Apache keeps just enough children alive to handle incoming
requests. If additional children must be spawned to handle a surge
of requests, Apache will ruthlessly kill them lest they lie around
forever idle, simply consuming resources. The world of Apache is a
brutal place.
How does all of this relate to Perl? A connection request
arrives at an Apache child process, and requests, for example, a
CGI script. The CGI
process occurs external to the Apache child, which means that the
child must fork a new process to launch the CGI script. In
the case of a CGI coded in Perl, the Perl interpreter must be
launched since Perl is not a compiled language. The interpreter is
launched as a separate process, it compiles and executes the Perl
code, and returns the results to the Apache child, who then passes
them along to the visitor. Works great, except for two problems:
it's slow, since the Perl script has to be re-interpreted every
time it is run, and it consumes even more memory, because the Perl
interpreter must be launched for each execution of the Perl
script.
The above describes your standard garden variety CGI
environment. For sites with low traffic and/or low processing
demands, CGI is easy to implement and the costs are still
reasonable (keep in mind that "slow" in computer terms is still
very, very fast in human terms).Where the CGI model begins to break
down is with sites that must process more than several simultaneous
requests for Perl scripts, and those scripts perform a variety of
activities such as database queries. A web site with these needs
will quickly become bogged down by the sheer inefficiency of CGI,
wasting memory and leaving visitors frustrated with noticeable wait
times.
One sunny (or cloudy, we just don't know) day, a bright fellow
named Doug MacEachern resolved to marry Perl and Apache, so that
rather than interacting as two foreign independent entities, the
two would be joined in holy matrimony, with the advantages and both
combined in union, able to tackle the world till obsolescence do
they part. With a knack for hacking, but perhaps not such a gift
for names, Doug names his new hybrid mod_perl. Put more
accurately, mod_perl is an Apache module that integrates
the Perl interpreter into the Apache web server.
The benefits of this integration are twofold:
Most sites that run Apache are based on Unix-like operating systems
such as Linux or FreeBSD, although Apache is also available for the
Windows platforms. You will need to be running an Apache web
server, preferably the newest stable release available (1.3.12 at
the time of writing) to make use of mod_perl, although there are
plug-ins similar to mod_perl for other web servers (nsapi_perl for Netscape
servers, or the commercial PerlEx by ActiveState for
O'Reilly, Microsoft, and Netscape servers).
On Apache under a Unix-like operating system, you can download
the source
for mod_perl (current version is 1.22). Alternatively, if you
are familiar with the CPAN.pm module the command
install Bundle::Apache will install mod_perl and several
related Perl modules that you may or may not wish to use. You can
also install mod_perl manually, from the source link above, and
then type perldoc Bundle::Apache to view a list of related
modules that you can retrieve and install if you wish.
Apache is also available for Windows, but many Windows Perl
coders use ActiveState's popular port, ActivePerl. This is a
problem for us here, because mod_perl will not (yet) work under
Windows with ActivePerl. There is hope -- you can freely
download a fully bundled set of binaries containing Apache,
mod_perl, and an alternate port of Perl all for Windows
95/98/NT.
While Windows users have downloaded binaries, many Unix-like
users have downloaded source code. The vagaries of compiling
anything under a Unix environment are complex, but in a typical
scenario you can rely on the built-in compilation scripts included
with Apache and mod_perl. The compilation procedure involves
building of mod_perl first, which then in turn builds the Apache
binaries -- the end result will be a new Apache httpd binary. The
installation summary below is reproduced from
Stas Bekman's thorough "mod_perl Guide" -- you can skip the
first five lines if you've already downloaded and unpacked the
Apache and mod_perl sources (which is what these lines do).
%
As illustrated, you simply need to unpack the Apache and
mod_perl sources into respective subdirectories, then change into
the mod_perl source directory and execute the "perl Makefile.PL"
command illustrated above. This tells the compiler where to find
the Apache sources and what options to build in -- the above
routine defaults to "everything" which is satisfactory for most
uses and certainly a first time experience. Finally, the sources
are all built while your computer churns and smokes for a few
minutes, and installed into place, typically
/usr/local/apache.
Assuming a /usr/local/apache destination, the new httpd
(the binary for the Apache server) will be found in
/usr/local/apache/bin.
If you've previously compiled an Apache server you may have
noticed that the typical httpd size is between 300-400K. Now, with
mod_perl integrated, the httpd has ballooned to over 1 megabyte.
Perl is, you can see, as William Shatner would shill, "big! really
big!". This brings us to the subject of tradeoffs.
Life is a box of compromises. Buffalo wings and cheesecake are a
swell meal, but make you fatter. Chicken broth and celery stalks
are slimming and dull. And so it is with the Apache web server,
which is much more robust with a belly full of Perl. The trouble in
the henhouse is that Apache, as we discussed, is pre-forking --
which means that a fat parent server will spawn fat children.
Several of them. Isn't that always the way. That's the cost of
doing business when you want to execute heavy Perl scripts with
aplomb, but most web sites are composed of more than simply Perl
scripts -- such as static web pages. And a static web page is like
a sheet of paper, lightweight. Unfortunately, if your site is
running mod_perl and has many static pages to serve in addition to
Perl scripts, that is one fat child process running around carrying
a tiny load.
So it's a battle of inefficiencies: vanilla Apache is
inefficient at executing Perl scripts via CGI, while mod_perl
beefed up Apache is inefficient at serving simple web pages. You
need to consider the general breakdown of pages served by your site
-- are we looking at 90% Perl scripts vs. 10% simple pages, or 10%
Perl scripts vs. 90% simple pages? Likely somewhere in between. At
the extremes, your best choice is to choose the most efficient
server for most of the time. In a scenario where 10% of your
requests trigger Perl scripts, it might be justifiable to live with
the relative penalty of CGI for the benefit of a small and compact
server process, allowing for more simultaneous visitors in a given
amount of memory. If you serve relatively few simple pages, the
advantages of a beefy mod_perl server will pay off more than the
penalty of a few extra though large processes. Many readers find
themselves somewhere between these two poles, though -- say, 30/70
or 40/60 or 50/50.
A nifty solution to this quandary is to run two Apache servers.
One Apache server is the small, compact vanilla version while the
other is the robust and hefty mod_perl enabled Apache server.
Incoming requests are then routed to the mod_perl server when Perl
scripts are required, while simple page requests are handled by the
lightweight server. Elegant enough, but the devil is in the
details. Ultimately, this is the preferred solution when you can't
justify serving all content from either a slim or fat Apache server
but it has its own pitfalls. You'll need to maintain two separate
installation trees for each Apache server, including separate
configuration files, and each server will spit out separate log
files, making the job of analyzing traffic a bit more complicated.
The mod_perl server is typically configured to listen on an
alternate network port, such as 8080, but you don't want end users
to see this -- all pages should appear to come from one server lest
problems arise with firewalls, bookmarks, and so on. This is solved
by employing internal proxying within the slim Apache server's
configuration file, to redirect requests for Perl scripts to the
mod_perl server "behind the scenes". That's the short of it -- the
long is simply too long and too off-topic for this article, but we
again direct you to Stas Bekman's thorough coverage of
multiple server arrangements.
For the sake of simplicity in this introduction, we'll assume a
single Apache server which is mod_perl enabled, even if this is not
the ideal architecture for sites with lots of static content. The
Apache server is configured, prior to launch, in the very long but
well commented httpd.conf file which, in a default
installation, is found in /usr/local/apache/conf
subdirectory. Once again, and not to pass the buck too often,
Apache server configuration is a career unto
itself, so we will focus only on configuration of the mod_perl
aspect.
Simply put, we want to tell Apache to process Perl scripts via
the Apache::Registry module, which is mod_perl's pseudo-CGI
environment. This allows us to run Perl scripts written for a
typical CGI environment (such as using the CGI.pm module) under
mod_perl, which is technically not a CGI extension.
The default httpd.conf file installed with Apache is
not configured to use mod_perl; instead, it is configured to
execute scripts via CGI. You will probably find a configuration
directive in your httpd.conf file that looks something
like:
ScriptAlias /cgi-bin/ "/usr/local/apache/cgi-bin/"
This directive tells Apache that any files in the relative path
/cgi-bin/ should be considered scripts, and launched
accordingly. You need to consider whether all scripts on your web
site will be Perl and handled by mod_perl, or whether there are
other scripts that may still need to execute via CGI. The safest
approach is to retain at least one subdirectory for traditional
old-style CGI scripts and one subdirectory for your mod_perl Perl
scripts. The ScriptAlias directive above must
only point to a path with CGI scripts, and not to
the path where you want Perl scripts executed from. Let's say,
then, that you create a new path --
/usr/local/apache/cgi-perl/ for your mod_perl enabled
scripts.
Of course, if you are running mod_perl scripts exclusively, you
could simply comment out the ScriptAlias directive by
preceding it with a pound symbol (#), and simply use the
cgi-bin/ path for your Perl scripts.
Now we're ready to add mod_perl specific configuration
directives. If you scroll through the httpd.conf file,
you'll find a section which contents the commented heading
"Aliases: Add here as many aliases as you need ...". It's easiest
to scroll down towards the end of this section, just before it is
closed with the tag, and add our new alias here.
Alias /cgi-perl/ "/usr/local/apache/cgi-perl/"
SetHandler perl-script
PerlHandler Apache::Registry
Options ExecCGI
PerlSendHeader On
Above, we define an alias, linking /cgi-perl/ to the
system path /usr/local/apache/cgi-perl/. The directive
references this alias and defines a number of attributes for it.
First, we tell Apache to let mod_perl handle these files via the
SetHandler directive, and we tell mod_perl to handle them
using its Apache::Registry module. The Registry module is
basically the star of the show here, as it is what handles
emulating a CGI environment and compiling/caching the Perl code. We
tell Apache to handle these files as executable via the
ExecCGI parameter, otherwise the browser would try to send
the script as a text file to the end user -- yikes!. Finally, we
tell Apache to send an HTTP header to the browser on script
execution -- this is not strictly necessary if your Perl script is
well behaved about sending the header itself, such as by the
CGI->header() method of the CGI.pm module.
Our mod_perl Apache server is ready to serve. That's the good
news. But, like any high performance piece of machinery, mod_perl
is not going to provide its optimum benefits right out of the box
like this. Before you're ready to tweak and tune, however, it's
important to get used to developing scripts in the mod_perl
environment (and for better or worse, there is a lot of
tweaking and tuning that can be done under the hood). Of course,
you'll want to save your Perl scripts to the system directory
aliased to /cgi-perl/ or whatever name you chose. Whether
you are adapting existing scripts or writing anew, your Perl should
interact with the browser just as you did before, via the
CGI.pm module, which we looked at way back in Part 2 of
the Perl You Need to Know. You can retrieve parameters and send
output to the browser just as before, but keep in mind that
although we continue to use the label "CGI" as a manner of
speaking, scripts executed by mod_perl are not technically using
the CGI extension.
Although many Perl scripts will run as-is in the mod_perl
environment, you are not yet taking full advantage of mod_perl's
benefits. We'll close out this month's installment looking at
pre-loading Perl modules. Next month we'll look some more at
optimizations, and also at some thorny pitfalls in coding practice
that could undermine Perl scripts that otherwise work fine outside
of mod_perl.
Your Perl scripts most probably begin by linking in some modules
via the use() statement. At the least, you probably:
#!/usr/bin/perl
use CGI;
Because your script invocations will likely keep using many of
the same modules, one mod_perl optimization is to pre-load these
modules, allowing mod_perl to compile them once and keep them
resident in memory. Future script executions do not then need to
recompile these modules, shaving a few more milliseconds off total
execution time. The typical way you can pre-load Perl modules is
with the PerlModule directive, which you can place in
Apache's httpd.conf file along with your other mod_perl
directives:
Alias /cgi-perl/
"/usr/local/apache/cgi-perl/"
PerlModule CGI
SetHandler perl-script
PerlHandler Apache::Registry
Options ExecCGI
PerlSendHeader On
You can list any other Perl modules you wish to pre-load in the
one PerlModule directive, simply separated by spaces.
There is a slightly more sophisticated method of pre-loading
modules that involves using the PerlRequire directive to
load a short script that contains "use ()" statements for each
module -- this is not a necessary step to begin with, but is nicely
illustrated in Vivek Khera's mod_perl_tuning
document.
Just because you've pre-loaded a Perl module does
not mean that you forego the "use ()" statement in
your Perl script. Leave those in as they are. Perl will not waste
time recompiling the module sources, but it will import necessary
elements of the module into your script's namespace, allowing you
to leave calls to the module unchanged in syntax within your
script.
It is tempting and simple to walk away from an introduction to
mod_perl thinking that it magically takes care of all
optimizations. The magical mod_perl genie just compiles your Perl
and everything is milk and honey. Not so fast! The ways in which
mod_perl compiles and caches code varies depending on how it is
used -- before we become immersed in details next month, go to
sleep tonight with a good overview of the ways in which mod_perl
can optimize Perl execution.
"Better Than Nothing" Optimization:.
"Hands Off" Optimization:.
"Typical" Optimization:.
Related Stories:
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled. | http://www.linuxtoday.com/infrastructure/2000041100704NWHL | CC-MAIN-2016-36 | refinedweb | 3,093 | 58.92 |
Would you please help me to run the LTC2414? I am a beginner at Arduino programming.
Do you think it's wise to start with Arduino programming using a chip that doesn't have library support on the Arduino platform? If your answer is still yes, start with your project, write some code, and post it here if you run into problems. This forum is not meant to be a free programming service.
A newbie with a 16-channel, 24-bit ADC. That’s an interesting combination.
For informed help, please post a link to the board design you are using, or if your own design, post schematics and a photo of the board.
You are, of course, aware of the LTC2418 demo board and companion Arduino board that is already available.
Hi, Welcome to the forum.
Please read the first post in any forum entitled how to use this forum.
Can you post a link to data/specs of LTC2414?
Can you tell us your electronics, programming, Arduino, hardware experience?
Thanks.. Tom... :)
Hi
I will put my schematic of the Arduino board and the program I have used here.

I want to use two channels of the LTC2414, but with the code below the output data is fixed: no matter how the input varies, nothing changes in the output. What is the problem?
```cpp
#include <stdint.h>
#include <SPI.h>

// LTC2418 single-ended configuration bytes
// these are the binary commands converted to hex to be sent through MISO
#define LTC2418_CH0  0xB0
#define LTC2418_CH1  0xB8
#define LTC2418_CH2  0xB1
#define LTC2418_CH3  0xB9
#define LTC2418_CH4  0xB2
#define LTC2418_CH5  0xBA
#define LTC2418_CH6  0xB3
#define LTC2418_CH7  0xBB
#define LTC2418_CH8  0xB4
#define LTC2418_CH9  0xBC
#define LTC2418_CH10 0xB5
#define LTC2418_CH11 0xBD
#define LTC2418_CH12 0xB6
#define LTC2418_CH13 0xBE
#define LTC2418_CH14 0xB7
#define LTC2418_CH15 0xBF

/* macros */
/********************************************************/
// set pin low
#define output_low(pin)  digitalWrite(pin, LOW)
// set pin high
#define output_high(pin) digitalWrite(pin, HIGH)
// return the state of pin
#define input(pin)       digitalRead(pin)
/********************************************************/

#define CS_multiplex 10
#define CS_accel     9
#define CS_mag       8

// constants
// constants defined within the LTC2418.h file are hex values corresponding to
// commands (when converted to binary) for reading each channel in single-ended mode
const uint8_t BUILD_COMMAND_SINGLE_ENDED[16] = {
  LTC2418_CH0,  LTC2418_CH1,  LTC2418_CH2,  LTC2418_CH3,
  LTC2418_CH4,  LTC2418_CH5,  LTC2418_CH6,  LTC2418_CH7,
  LTC2418_CH8,  LTC2418_CH9,  LTC2418_CH10, LTC2418_CH11,
  LTC2418_CH12, LTC2418_CH13, LTC2418_CH14, LTC2418_CH15
};

const uint16_t MISO_TIMEOUT = 1000; // the MISO timeout (ms)

// function prototypes
int8_t LTC2418_EOC_timeout(uint8_t cs, uint16_t miso_timeout); // check LTC2418 to see if end of conversion
void LTC2418_single_read_raw(uint8_t cs, uint8_t adc_command, uint32_t *adc_code); // read single channel raw data

void setup()
{
  // set chip select pins as output pins
  pinMode(CS_multiplex, OUTPUT);
  pinMode(CS_accel, OUTPUT);
  pinMode(CS_mag, OUTPUT);

  SPI.begin();
  SPI.beginTransaction(SPISettings(4000000, MSBFIRST, SPI_MODE0));

  output_high(CS_multiplex); // pulls DAQ multiplexer chip select high
  output_high(CS_accel);     // pulls accelerometer chip select high
  output_high(CS_mag);       // pulls magnometer chip select high

  Serial.begin(115200);      // begin serial port with specified baud rate
}

void loop()
{
  while (1)
  {
    uint32_t raw_output;
    uint8_t tx_command;

    tx_command = BUILD_COMMAND_SINGLE_ENDED[8]; // build ADC command
    Serial.print("\nReading channel 8...\n");

    if (LTC2418_EOC_timeout(CS_multiplex, MISO_TIMEOUT == 1)) // check for EOC
    {
      Serial.print("Wait for EOC timed out\n");
    }
    LTC2418_single_read_raw(CS_multiplex, tx_command, &raw_output);

    Serial.print("RAW TX: ");
    Serial.print(tx_command, HEX);
    Serial.print("\n");
    Serial.print("RAW RX: ");
    Serial.print(raw_output, HEX);
    Serial.print("\n");

    delay(2000);
  }
}

// function definitions

// Checks for EOC with a specified timeout
int8_t LTC2418_EOC_timeout(uint8_t cs, uint16_t miso_timeout)
{
  uint16_t timer_count = 0;  // Timer count for MISO
  output_low(cs);            //! 1) Pull CS low
  while (1)                  //! 2) Wait for SDO (MISO) to go low
  {
    if (input(MISO) == 0)
    {
      break;                 //! 3) If SDO is low, break loop
    }
    if (timer_count++ > miso_timeout) // If timeout, return 1 (failure)
    {
      output_high(cs);       // Pull CS high
      return (1);
    }
    else
      delay(1);
  }
  return (0);
}

// Reads single channel raw data on LTC2418
void LTC2418_single_read_raw(uint8_t cs, uint8_t adc_command, uint32_t *adc_code)
{
  union union_int32_4bytes
  {
    uint32_t LT_uint32;  // 32-bit unsigned integer
    int32_t  LT_int32;   // 32-bit signed integer
    uint8_t  LT_byte[4]; // 4 bytes unsigned (4 * 8-bit integers)
  } data, command;

  command.LT_byte[0] = adc_command;
  command.LT_byte[1] = 0;
  command.LT_byte[2] = 0;
  command.LT_byte[3] = 0;

  output_low(cs);   //! 1) Pull CS low
  data.LT_byte[0] = SPI.transfer(command.LT_byte[0]);
  data.LT_byte[1] = SPI.transfer(command.LT_byte[1]);
  data.LT_byte[2] = SPI.transfer(command.LT_byte[2]);
  data.LT_byte[3] = SPI.transfer(command.LT_byte[3]);
  output_high(cs);  //! 3) Pull CS high

  Serial.print("TX:\n");
  Serial.print("LT_byte[0] = "); Serial.print(command.LT_byte[0], BIN); Serial.print("\t");
  Serial.print("LT_byte[1] = "); Serial.print(command.LT_byte[1], BIN); Serial.print("\t");
  Serial.print("LT_byte[2] = "); Serial.print(command.LT_byte[2], BIN); Serial.print("\t");
  Serial.print("LT_byte[3] = "); Serial.print(command.LT_byte[3], BIN); Serial.print("\t");
  Serial.print("\n");

  Serial.print("RX:\n");
  Serial.print("LT_byte[0] = "); Serial.print(data.LT_byte[0], BIN); Serial.print("\t");
  Serial.print("LT_byte[1] = "); Serial.print(data.LT_byte[1], BIN); Serial.print("\t");
  Serial.print("LT_byte[2] = "); Serial.print(data.LT_byte[2], BIN); Serial.print("\t");
  Serial.print("LT_byte[3] = "); Serial.print(data.LT_byte[3], BIN); Serial.print("\t");
  Serial.print("\n");

  *adc_code = data.LT_int32;
}
```
Hi,
OPs images.
Tom... :)
Hi,
The LT1021 is a voltage reference NOT a power supply regulator, it is only rated to 10mA output current.
Have you a DMM to measure the 5V supply to the UNO and 2414?
I don’t think you can leave pin19 on the 2414 open circuit?
What is your 9V power supply?
Thanks… Tom…
241418fb.pdf (871 KB)
ali_elect62: whats the problem?
Have you studied the datasheet ?
All pins connected as described ?
Hi,
Thanks for the answers.
I made some changes in the circuit; you can see them in the attachment. But the output data is still fixed.
What about my program? Is it correct?
Hi,
With the above code the output data varies... for 5 V the output is 0.0043021 V, for 3.3 V it is -1.545451, and for 0 V it is -2.5 V.
The above program reads just one channel (CH0). Now, if I want to have two channels (CH0 & CH1) with 0 to 16777216 output data, what should I do? Thanks.
what should I do
Stop ignoring our questions and explanations about existing hardware problems.
It has been pointed out that the connections to the LT1021 are incorrect. You were asked in post #7 to measure voltages that would confirm this.
You continue to ignore these facts and questions.
To be absolutely clear: the output of the LT1021 is connected to one point only, the REF input of the ADC. The LTC2414 VCC pin should be connected to the 5 volt output of the Uno.
Hello. Sorry, I cannot write English well; maybe that causes some misunderstanding. You can see in the schematic that I use a separate 9 VDC power supply for the LT1021 input. I did what you said: "the output of the LT1021 is connected to one point only." Now I want to activate CH0 & CH1, so that 0 to 16777216 output data becomes available; then I can check these new changes in the circuit against the results.
Hi,
Have you wired the 5V to the 2414 like this?
Have you got anything connected to the 2414 CH0 and CH1?
I would suggest you use two potentiometers so you can control the ch0 and ch1 inputs.
Tom…
Have you wired the 5V to the 2414 like this?
Yes
Have you got anything connected to the 2414 CH0 and CH1?
Two 10k ohm potentiometers are connected to CH0 and CH1. But the above code reads just one channel (CH0). What should I do to read two channels?
Hi,
Thanks.. Tom.. :)
For 0v RAW TX: B4 RAW RX: 1503D8A1 VOLTAGE: -1.716403V
For 1v RAW TX: B4 RAW RX: 19681DA0 VOLTAGE: -1.030203V
For 2v RAW TX: B4 RAW RX: 1DBFCDA0 VOLTAGE: -0.351683V
For 3v RAW TX: B4 RAW RX: 22181721 VOLTAGE: 0.327203V
For 4v RAW TX: B4 RAW RX: 2654DFE0 VOLTAGE: 0.989303V
For 5v RAW TX: B4 RAW RX: 2ABBE761 VOLTAGE: 1.677187V
Appears to be working but with offset.
What DC voltage do you measure between pins 11 and 12 of the LTC2414 (REF+ and REF-) ?
Hi
What DC voltage do you measure between pins 11 and 12 of the LTC2414 (REF+ and REF-) ?
Its 5V
I think there are some problems in the code....!
:( | https://forum.arduino.cc/t/help-me-for-ltc2414/490327 | CC-MAIN-2021-21 | refinedweb | 1,356 | 58.99 |
Vista: Unable to start debugging on the web server. IIS does not list an application that matches the launched URL.
Hello,
I am working with Vista Business, Visual Studio 2005 SP 1, IIS7. Last week I had to install the Hotfix KB ID 937523 because I was unable to Debug anything (see).
Now when debugging with VS2005 I get the error "Unable to start debugging on the web server. IIS does not list an application that matches the launched URL." This happens when I want to debug a ASPX Page in a sub directory from my project.
Thanks in advance for any help.
Joerg
All replies
Hi,
We get the same error described in the original post using VS 2005 with the hotfix ID 937523 applied, ASP.NET 2.0, and Windows Vista Ultimate (therefore using IIS 7.0).
Enabling Windows Authentication is not an option for us, since our web application uses Forms Authentication. We technically can move the web pages down to the root directory, but that defeats the purpose of having nice folder/namespace separation we had setup for our application, therefore this probably will not be an option for us either.
Please note, that we are able to perform debugging by browsing to any page (so that the application starts) and then attaching to the process via VS 2005. Pressing F5 to start debugging from VS 2005 is just a lot more convenient (i.e. less steps) and enables us to debug the Page_Load event for the first page in our application (currently the Login page).
Kind Regards,
Jamie
Hi,
It looks like the Hotfix KB937523 still left some issues open. In my case, installing this hotfix did not change anything on Windows Vista Home Premium; Windows Authentication was still invisible in IIS. I am wondering what installation sequence I need to follow to get debugging working. I had installed VS2005 SP1, the VS2005 SP1 update for Windows Vista, the VS2005 extensions for .NET 3.0, and the SDK. Could the installation order have affected the hotfix?
Thanks for any help,
Mitch
I just had this issue. I went into the property pages for the web site in Visual Studio and, in Start Options, set it to start on a specific page (default.aspx in my case; it was set to use the current page), and it works fine now.
Hope this helps someone else too?!
MS have truly made a right mess of this with Vista!
So, to clarify my own experience:
* Install VS 2005 SP1
* Install VS 2005 SP1 Hotfix for Vista
* Install the aforementioned hotfix to prevent the Windows authentication issue
The issue then appears when pointing to any page below the root of the application.
I have the same issue on Vista Business after installing KB937523 hot fix (to fix the other error - An authentication error occurred while communicating with the web server).
The workaround that works for me is to start debugging without having any breakpoints, navigate to the page you need to debug, and after that put breakpoints on this page. The debugger then enters the page as needed.
There are two workarounds I can find. I hope they help you.
1. Change the default start page to make sure it is at the root directory of the application.
2. Change the application pool's managed pipeline mode to Classic, which can be set by right-clicking the application pool and then selecting Basic Settings.
Good luck!
Thank you very much .. that did the trick for me
/Johan
- no, no solution found here works for me. Windows Server 2008, VS2008. still the same problem.
- Proposed as answer by Dennis Lindkvist Thursday, April 30, 2009 8:34 AM | https://social.msdn.microsoft.com/Forums/vstudio/en-US/04532b6d-e281-4b7f-a623-543dcb683e2f/vista-unable-to-start-debugging-on-the-web-server-iis-does-not-list-an-application-that-matches?forum=vsdebug | CC-MAIN-2015-11 | refinedweb | 616 | 72.66 |
Add i18n Yourself in a React and Redux App

One thing that surprised me at first with React is how everything is a component. That may sound simple on the surface, but for the functional stuff (say form management, i18n (internationalization), routing, state…) it just doesn’t feel very natural to me and adds component layers for non-visual functional-only components.
Usually other UI libs/frameworks have an API-based way to deal with functional logic: a Router API, a Form API, etc. They provide a way to extend their own API or instance for those cases.
When looking into i18n solutions for React, I found that most solutions use translation components. They’re great, especially for large projects when you need more functionality and features, but my use case was quite simple, so I decided to do it myself. In this post I’ll describe how I did it.
If you’re interested in trying out one of the i18n libraries for React, check out this post covering i18next and react-i18next.
The Literals Reducer
The solution I’ll show you uses Redux since it’s already a great state container, but you could build a container for the literals yourself if you’d like, the mechanics are similar.
First, let’s create a store/literals.js as the Redux piece of state to store the literals:
```js
const defaultState = {};

const LOAD_LITERALS = "LOAD_LITERALS";

export default (state = defaultState, { type, payload }) => {
  switch (type) {
    case LOAD_LITERALS:
      return payload;
    default:
      return state;
  }
};

export const loadLiterals = literals => ({
  type: LOAD_LITERALS,
  payload: literals,
});
```
Nothing special here if you’re already familiar with Redux. We just have a reducer that replaces the whole state of literals and a `loadLiterals` action creator to set them.
In a React/Redux app, the setup usually starts with the index.js file using React Redux’s Provider component:
```js
import React from "react";
import ReactDOM from "react-dom";
import { Provider } from "react-redux";

import App from "./App";
import store from "./store";

ReactDOM.render(
  <Provider store={store}>
    <App />
  </Provider>,
  document.getElementById("root")
);
```
Here, `store` comes from a file where you create the Redux store, passing the root reducers to a function like `combineReducers`:
```js
import { createStore, combineReducers } from "redux";

import literals from "./literals.js";

const rootReducer = combineReducers({
  literals,
  // other reducers...
});

export default createStore(rootReducer);
```
Nothing special yet, just the usual steps to create a Redux store.
Let’s create a folder with the following structure to organize your i18n logic:
```
+ i18n
  - index.js
  - en.json
  - es.json
  ....
```
The JSON files just have key-value data with the language literals:
{ "app_greet": "Hey Joe!" }
As for the index.js file, we can expose a function that returns the literals for a given language:
```js
import en from "./en.json";
import es from "./es.json";

const langs = { en, es };

export default function (lang = "en") {
  return langs[lang];
}
```
The idea is to load the literals early in your app. You probably have to initialize and configure other stuff at the same time as well. I sometimes create an init.js file for that, but organize it however you prefer.

However, you must be sure that the store is already created. Let's just do it in index.js, right after creating the store:
```js
import React from "react";
import ReactDOM from "react-dom";
import { Provider } from "react-redux";

import App from "./App";
import { loadLiterals } from "./store/literals";
import store from "./store";
import loadLang from "./i18n";

const lang = loadLang();
store.dispatch(loadLiterals(lang));

ReactDOM.render(
  <Provider store={store}>
    <App />
  </Provider>,
  document.getElementById("root")
);
```
As you can see, we're loading them by calling the `loadLiterals` function. We can call Redux actions from outside the components by using the `store.dispatch` instance method.
That should be enough to have your literals loaded. Then, in any component, you could just get your piece of the store using the connect function. Here’s a basic example:
```js
import React from "react";
import { connect } from "react-redux";

const App = ({ literals }) => (
  <div>
    {literals.app_greet}
  </div>
);

const mapStateToProps = ({ literals }) => ({ literals });

export default connect(mapStateToProps)(App);
```
Lazy Loading Literals
If we want to go a step further, we could change the default export in `i18n/index.js` to lazy load the literals by using the JavaScript dynamic import feature:
```js
export default function (lang = "en") {
  return import(`./${lang}.json`);
}
```
Not only does the function become simpler, but the literals are also lazy loaded on demand, making the bundle smaller, which means an app that loads faster.
Since the dynamic import returns a promise, now we need to update how we load the literals in the store as follows:
```js
import React from "react";
import ReactDOM from "react-dom";
import { Provider } from "react-redux";

import App from "./App";
import { loadLiterals } from "./store/literals";
import store from "./store";
import loadLang from "./i18n";

loadLang().then(lang => store.dispatch(loadLiterals(lang)));

ReactDOM.render(
  <Provider store={store}>
    <App />
  </Provider>,
  document.getElementById("root")
);
```
Wrapping Up
We’ve seen how you can add some simple i18n functionality from scratch to your React/Redux apps. You don’t need to do things that way, and there surely are different ways to accomplish the same thing, but I hope you’ve seen that it can be easy and fun to do it yourself, and that this might be enough for simple use cases. | https://www.digitalocean.com/community/tutorials/react-create-i18n-module | CC-MAIN-2020-34 | refinedweb | 865 | 55.44 |
In this article by Philip Herron, author of Learning Cython Programming, we will start to get serious with Cython and will discuss how to describe C declarations with respect to Cython, along with calling conventions and type conversion.
If you were to create an API for Python, you should write it using Cython to create a more type-safe Python API. Or, you could take the C types from Cython to implement the same algorithms in your Python code, and they will be faster because you're specifying the types and you avoid a lot of the type conversion required.
Consider you are implementing a fresh project in C. There are a few issues we always come across in starting fresh; for example, choosing the logging or configuration system we will use or implement.
With Cython, we can reuse the Python logging system as well as the ConfigParser standard libraries from Python in our C code to get a head start. If this doesn't prove to be the correct solution, we can chop and change easily. We can even extend and get Python to handle all usage. Since the Python API is very powerful, we might as well make Python do as much as it can to get us off the ground. Another question is do we want Python be our "driver" (main entry function) or do we want to handle this from our C code?
Cython cdef
In the next two examples, I will demonstrate how we can reuse the Python logging and Python ConfigParser modules directly from C code. But there are a few formalities to get over first, namely the Python initialization API and the link load model for fully embedded Python applications for using the shared library method.
It's very simple to embed Python within a C/C++ application; you will require the following boilerplate:
#include <Python.h>
int main (int argc, char ** argv)
{
Py_SetProgramName (argv [0]);
Py_Initialize ();
/* Do all your stuff in side here...*/
Py_Finalize ();
return 0;
}
Make sure you always put the Python.h header at the very beginning of each C file, because Python contains a lot of headers defined for system headers to turn things on and off to make things behave correctly on your system.
Later, I will introduce some important concepts about the GIL that you should know and the relevant Python API code you will need to use from time to time. But for now, these few calls will be enough for you to get off the ground.
Linking models
Linking models are extremely important when considering how we can extend or embed things in native applications. There are two main linking models for Cython: fully embedded Python and code, which looks like the following figure:
This demonstrates a fully embedded Python application where the Python runtime is linked into the final binary. This means we already have the Python runtime, whereas before we had to run the Python interpreter to call into our Cython module. There is also a Python shared object module as shown in the following figure:
We have now fully modularized Python. This would be a more Pythonic approach to Cython, and if your code base is mostly Python, this is the approach you should take if you simply want to have a native module to call into some native code, as this lends your code to be more dynamic and reusable.
The public keyword
Moving on from linking models, we should next look at the public keyword, which allows Cython to generate a C/C++ header file that we can include with the prototypes to call directly into Python code from C.
The main caveat when calling Cython public declarations directly from C, with a fully embedded link model against libpython.so, is that you need the boilerplate code shown in the previous section, and before calling any of the functions you must initialize the Python module. For example, if you have a cythonfile.pyx file and compile it with public declarations such as the following:
cdef public void cythonFunction ():
    print "inside cython function!!!"
You will not only get a cythonfile.c file but also cythonfile.h; this declares a function called extern void initcythonfile (void). So, before calling anything to do with the Cython code, use the following:
/* Boilerplate init Python */
Py_SetProgramName (argv [0]);
Py_Initialize ();

/* Init our cythonfile module into Python memory */
initcythonfile ();

cythonFunction ();

/* cleanup python before exit ... */
Py_Finalize ();
Calling initcythonfile can be considered as the following in Python:
import cythonfile
Just like the previous examples, this only affects you if you're generating a fully embedded Python binary.
Logging into Python
A good example of Cython's abilities in my opinion is reusing the Python logging module directly from C. So, for example, we want a few macros we can rely on, such as info (…) that can handle VA_ARGS and feels as if we are calling a simple printf method.
I think that after this example, you should start to see how things might work when mixing C and Python now that the cdef and public keywords start to bring things to life:
import logging

cdef public void initLogging (char * logfile):
    logging.basicConfig (filename = logfile,
                         level = logging.DEBUG,
                         format = '%(levelname)s %(asctime)s: %(message)s',
                         datefmt = '%m/%d/%Y %I:%M:%S')

cdef public void pyinfo (char * message):
    logging.info (message)

cdef public void pydebug (char * message):
    logging.debug (message)

cdef public void pyerror (char * message):
    logging.error (message)
This could serve as a simple wrapper for calling directly into the Python logger, but we can make this even more awesome in our C code with C99 __VA_ARGS__ and an attribute that is similar to GCC printf. This will make it look and work just like any function that is similar to printf. We can define some headers to wrap our calls to this in C as follows:
#ifndef __MAIN_H__
#define __MAIN_H__
#include <Python.h>
#include <stdio.h>
#include <stdarg.h>
#define printflike \
__attribute__ ((format (printf, 3, 4)))
extern void printflike cinfo (const char *, unsigned, const char *, ...);
extern void printflike cdebug (const char *, unsigned, const char *, ...);
extern void printflike cerror (const char *, unsigned, const char *, ...);
#define info(...) \
cinfo (__FILE__, __LINE__, __VA_ARGS__)
#define error(...) \
cerror (__FILE__, __LINE__, __VA_ARGS__)
#define debug(...) \
cdebug (__FILE__, __LINE__, __VA_ARGS__)
#include "logger.h" // remember to import our cython public's
#endif //__MAIN_H__
Now we have these macros calling cinfo and the rest, and we can see the file and line number where we call these logging functions:
void cdebug (const char * file, unsigned line, const char * fmt, ...)
{
char buffer [256];
va_list args;
va_start (args, fmt);
vsnprintf (buffer, sizeof (buffer), fmt, args);
va_end (args);
char buf [512];
snprintf (buf, sizeof (buf), "%s-%i -> %s", file, line, buffer);
pydebug (buf);
}
On calling debug ("debug message"), we see the following output:
Philips-MacBook:cpy-logging redbrain$ ./example log
Philips-MacBook:cpy-logging redbrain$ cat log
INFO 05/06/2013 12:28:24: main.c-62 -> info message
DEBUG 05/06/2013 12:28:24: main.c-63 -> debug message
ERROR 05/06/2013 12:28:24: main.c-64 -> error message
Also, note that inside this Cython module we can import and do anything we would normally do in Python, so don't be afraid to make lists or classes and use these to help out. Remember, if you have a Cython module with public declarations calling into the logging module, it integrates with your application as if it were one program.
More importantly, you only need all of this boilerplate when you fully embed Python, not when you compile your module to a shared library.
Python ConfigParser
Another useful case is to make Python's ConfigParser accessible in some way from C; ideally, all we really want is to have a function to which we pass the path to a config file to receive a STATUS OK/FAIL message and a filled buffer of the configuration that we need:
from ConfigParser import SafeConfigParser, NoSectionError

cdef extern from "main.h":
    struct config:
        char * path
        int number

cdef config myconfig
Here, we've Cythoned our struct and declared an instance on the stack for easier management:
cdef public config * parseConfig (char * cfg):
    # initialize the global stack variable for our config...
    myconfig.path = NULL
    myconfig.number = 0
    # buffers for assigning python types into C types
    cdef char * path = NULL
    cdef int number = 0

    parser = SafeConfigParser ()
    try:
        parser.readfp (open (cfg))
        pynumber = int (parser.get ("example", "number"))
        pypath = parser.get ("example", "path")
    except NoSectionError:
        print "No section named example"
        return NULL
    except IOError:
        print "no such file ", cfg
        return NULL

    myconfig.number = pynumber
    myconfig.path = pypath
    return &myconfig
This is a fairly trivial piece of Cython code that returns NULL on error, or a pointer to the struct containing the configuration on success:
Philips-MacBook:cpy-configparser redbrain$ ./example sample.cfg
cfg->path = some/path/to/something
cfg-number = 15
As you can see, we easily parsed a config file without using any C code. I always found figuring out how I was going to parse config files in C to be a nightmare. I usually ended up writing my own mini domain-specific language using Flex and Bison as a parser as well as my own middle-end, which is just too involved.
Cython cdef syntax and usage reference
So far, we have explored how to set up Cython and how to run "Hello World" modules. Not only that, we have also seen how we can call our own C code from Python. Let's take a look at how we can interface Python into different C declarations such as structs, enums, and typedefs. We will use this to build up a cool project at the end of the article.
Although not that interesting or fun, this small section should serve as a reference for you later on when you're building your next awesome project.
Structs

Consider the following C header, which declares a struct and a function that prints it:
#ifndef __MYCODE_H__
#define __MYCODE_H__
struct mystruct {
char * string;
int integer;
char ** string_array;
};
extern void printStruct (struct mystruct *);
#endif //__MYCODE_H__
Now we can use Cython to interface and initialize structs and even allocate/free memory. There are a few pointers to make a note of when doing this, so let's create the code. First we need to create the Cython declaration:
cdef extern from "mycode.h":
    struct mystruct:
        char * string
        int integer
        char ** string_array
    void printStruct (mystruct *)

def testStruct ():
    cdef mystruct s
    cdef char *array [2]
    s.string = "Hello World"
    s.integer = 2
    array [0] = "foo"
    array [1] = "bar"
    s.string_array = array
    printStruct (&s)
Let's look at this line by line. First off, we see the cdef keyword; this tells Cython that this is an external C declaration and that the original C declarations can be included from mycode.h; the generated code from Cython can include this to squash all warnings about undeclared symbols. Anything within this cdef suite is treated by Cython as a C declaration. The struct looks very similar to a normal C struct; just be careful with your indentation. Also be sure, even in cdef functions, that if you want explicit C types, you declare them with the cdef type identifier so they are of the correct type and not just PyObjects.

There are a few subtleties with the testStruct function. We declare our struct and array on the stack with cdef as well, as this allows us to declare variables. In Cython, we have the reference operator &; this works just as in C, so we have the struct on the stack and we can pass a pointer via the reference operator just like in C. But we don't have a -> operator in Cython, so when accessing the struct (even through a pointer), we simply use the . operator; Cython understands this at compile time. We also have an extension in Cython to specify fixed-length arrays as shown, and assignment should look very familiar.
There are a few subtleties with the testStruct function. We declare our struct and array on the stack with cdef as well, as this allows us to declare variables. In Cython, we have the reference operator &; this works just as in C, so we have the struct on the stack and we can pass a pointer via the reference operator just like in C. But we don't have a →operator in Cython, so when trying to access the struct (even if it is on a pointer), we simply use the .operator. Cython understands this at compile time. We also have an extension in Cython to specify fixed length arrays as shown and assignment should look very familiar. A simple makefile for this system would be as follows:
all:
clean:
rm -f *.o *.so *~ mycodepy.c
And a simple printStruct function would be as follows:
#include <stdio.h>
#include "mycode.h"
void printStruct (struct mystruct * s)
{
printf (".string = %s\n", s->string);
printf (".integer = %i\n", s->integer);
printf (".string_array = \n");
int i;
for (i = 0; i < s->integer; ++i)
printf ("\t[%i] = %s\n", i, s->string_array [i]);
}
A simple run of this in the downloaded code is as follows:
redbrain@blue-sun:~/workspace/cython-book/chapter2/c-decl-reference$ make
redbrain@blue-sun:~/workspace/cython-book/chapter2/c-decl-reference$ python
Python 2.7.3 (default, Sep 26 2012, 21:51:14)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from mycodepy import testStruct
>>> testStruct ()
.string = Hello World
.integer = 2
.string_array =
[0] = foo
[1] = bar
This simply demonstrates that Cython can work properly with C structs—it initialized the C struct and assigned it data correctly, as you would expect if it was from C.
Enums
Interfacing with C enums is simple. If you have the following enum in C:
enum cardsuit {
CLUBS,
DIAMONDS,
HEARTS,
SPADES
};
This can be expressed as the following Cython declaration:
cdef enum cardsuit:
    CLUBS, DIAMONDS, HEARTS, SPADES
Then, use the following as the cdef declaration within our code:
cdef cardsuit card = CLUBS
Typedef and function pointers
Typedefs and function pointers are both expressed with Cython's ctypedef keyword, mirroring the C declarations one-to-one. For example, given the following C code:

typedef struct mystruct mystruct_t;
typedef void (*my_callback) (int);

the Cython declarations would be:

cdef extern from "mycode.h":
    ctypedef struct mystruct_t:
        pass
    ctypedef void (*my_callback) (int)
Scalable asynchronous servers
Using all the concepts learned in this article, I want to show you how we can use Cython to build something awesome—a complete messaging server that uses C to do all the low-level I/O and libevent to keep everything asynchronous. This means we will be using callbacks to handle the events that we will manage in the Python messaging engine. We can then define a simple protocol for a messaging system and roster. This design can be easily extended to a lot of things. To see if we are on the same page, refer to the following figure:
C sockets with libevent
For those of you who are unfamiliar with libevent, I will now give a brief overview and show the main parts of the code.
What is libevent?
libevent lets us create a socket in C and hand its file descriptor to libevent along with several events to care about; for example, if a client is connecting to this socket, we can tell libevent to listen for it and call our callback. Other events such as errors (clients going offline) or reads (clients pushing up data) can be handled in the same manner. We use libevent because it's much more scalable and well defined, and it is a far better choice than writing our own polling event loop.
Once we create a socket, we must make it non-blocking for libevent. This useful snippet of C code may or may not be familiar to you, but it's a useful one to have in your tool-belt:
int setnonblock (int fd)
{
int flags;
flags = fcntl (fd, F_GETFL);
if (flags < 0)
return flags;
flags |= O_NONBLOCK;
if (fcntl (fd, F_SETFL, flags) < 0)
return -1;
return 0;
}
Once you create a socket, you pass the resulting file descriptor to this function and then create an on-connect event for libevent:
struct event ev_accept;
event_assign (&ev_accept, evbase,
sockfd,
EV_READ|EV_PERSIST,
&callback_client_connect,
NULL);
event_add (&ev_accept, NULL);
Now we have an event that will call the callback_client_connect function. Test this server with the following:
redbrain@blue-sun:~/workspace/cython-book/chapter2/async-server/server1$
make
gcc -g -O2 -Wall -c server.c -o server.o
gcc -g -O2 -o server server.o -levent
redbrain@blue-sun:~/workspace/cython-book/chapter2/async-server/server1$
./server
In another shell or multiple shells, run telnet to act as a simple client for now:
$ telnet localhost 9080
You can now type away and see all your data and events. At the moment, this is just a dumb event-driven messaging system, but imagine how you would begin adding a messaging engine to pass messages between clients, and how you would set up a protocol in C. It would take some time to map out and, in general, it would be an unpleasant experience. We can use Cython to take control of the server and create our logic in Python using callbacks.
Messaging engine
With these callbacks, we can start making use of Python very easily to make this project awesome.
Cython callbacks
If you look at cython-book/chapter2/async-server/server2, you can see the callbacks in action:
./messagingServer -c config/server.cfg -l server.log
You can also spawn multiple telnet sessions again to see some things being printed out. There is a lot going on here, so I will break it down first. If you look inside this directory, you will see pyserver.pyx and pyserver.pxd. Here, we will introduce the pseudo Cython header files: (*.pxd).
Cython PXD
The use of PXD files is very similar to that of header files in C/C++. We can simply use our cdef declarations like extern functions or struct definitions and then use the following within a *.pyx file:
cimport pyserver
Now you can just code your method prototypes like you would in C and the cimport of the PXD file will get all the definitions.
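As a small illustration (file and symbol names here are hypothetical, not taken from the book's project), a shared declarations.pxd file can be pulled into any .pyx file with cimport:

```
# declarations.pxd -- shared cdef declarations only, no implementations
cdef extern from "server.h":
    struct client:
        int cid

# pyserver.pyx -- pulls those declarations in
cimport declarations

cdef public void pyconnect_callback (declarations.client * c, char * args):
    print c.cid, "is online..."
```

This keeps the C-facing declarations in one place, much like a C header shared between translation units.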
Now that you have seen how *.pxd files work, we will remove the main method from server.c so we can use Python to control the whole system. If you look at pyserver.pyx, you will see the pyinit_server function; it takes a port number, so we can pass the server's configuration from pure Python with import pyserver once we build the shared library. We also call into server.c to set the callbacks, which are cdef Cython functions whose addresses we pass to the server:
static callback conncb, discb, readcb;
void setConnect_PyCallback (callback c)
{
conncb = c;
}
void setDisconnect_PyCallback (callback c)
{
discb = c;
}
void setRead_PyCallback (callback c)
{
readcb = c;
}
Now, in each of the events that exist, we can call these callbacks simply with readcb (NULL, NULL) and we will be in Python land. You can look at the Cython functions in depth in the pyserver.pyx file; for now, know that they just print out some data:
cdef void pyconnect_callback (client *c, char * args):
print c.cid, "is online..."
cdef void pydisconnect_callback (client *c, char * args):
print c.cid, "went offline..."
cdef void pyread_callback (client *c, char * args):
print c.cid, "said: ", args
These are your basic callbacks into Cython code from the native event-driven system. You can see the basic main method in the messagingServer.py file. It is executable and initializes everything required for our purposes: it simply imports pyserver and calls pyinit_server with a port. I know this may seem a fairly niche example, but I truly believe it demonstrates how cool C/Python can be. With this, you can use Python to control the configuration of system-level C components very easily, something that is fiddly to do well in pure C. We let Python do it.
Python messaging engine
Now that you've seen how we can have callbacks from this system into Cython, we can start to add some logic to the system so that if you spawn multiple localhost connections, they will run concurrently. It would be good to have some roster logic, say, making the client address its identifier so that there can be only one client per address. We could implement this with a simple dictionary whose keys are addresses and whose values are true or false for online or offline. When a client connects, we query the dictionary: if that address is already online, we kill the new connection; otherwise, we mark it online. Currently, messagingEngine.py implements a basic Roster class to perform this function.
This Roster class initializes a dictionary of client objects keyed by name, and handleEvent, when given a roster event, handles clients going online and offline via the Cython callbacks. The other case is when the client is already online: we return true if we want to tell the server to disconnect that client by closing the socket connection; otherwise, we return false.
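If you want a feel for the shape of such a class, here is a minimal pure-Python sketch of the roster idea. The method and key names are ours for illustration; the real messagingEngine.py differs:

```python
class Roster:
    """Track which client addresses are online: at most one client per address."""

    def __init__(self):
        self.clients = {}  # address -> True (online) / False (offline)

    def handle_event(self, event, address):
        """Return True when the server should close this client's socket."""
        if event == "connect":
            if self.clients.get(address):
                return True          # address already online: reject duplicate
            self.clients[address] = True
        elif event == "disconnect":
            self.clients[address] = False
        return False
```

A duplicate connect from an address that is already online returns True, which the callback layer can translate into closing the socket on the C side.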
A simple way to initialize the roster class is through pyserver.pyx:
from messagingEngine import Roster
roster = None
def pyinit_server (port):
global roster
roster = Roster ()
….
Now, in each of the callbacks, we can simply call roster.handleEvent (…). On running this, we can see that connections from the same address are now closed, as shown in the following screenshot (only one client per address is allowed to be logged in to the system at a time):
I think this gives you an idea of how easy it could be to have Python handle message passing. You could easily extend your read callbacks to fully read the buffer and use Google protocol buffers to implement a full protocol for your system, but that's a whole project of its own.
Integration with build systems
This topic depends largely on the linking model you choose. If you go for the shared-library approach, I would recommend using Python distutils; if you are going for embedded Python, you should choose the autotools approach.
Python distutils
I just want to note how you can integrate Cython into your setup.py file; it's very simple:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
setup(
scripts = ['messagingServer.py'],
packages = ['messagingEngine'],
cmdclass = { 'build_ext' : build_ext },
ext_modules = [ Extension ("pyserver", ["pyserver.pyx",
"server.c" ]) ]
)
Just append your module sources and Cython picks up the *.pyx and *.c files. You can use setup.py as you normally would:
$ python setup.py build
$ python setup.py install
Note that to install correctly, you must package and modularize your project so that messagingEngine is now its own module:
$ mkdir messagingEngine
$ cd messagingEngine
$ mv ../messagingEngine.py .
$ touch __init__.py
$ $EDITOR __init__.py
__all__ = ['messagingEngine']
GNU/Autotools
The snippet you need to know for this would simply be as follows:
found_python=no
AC_ARG_ENABLE(
python,
AC_HELP_STRING([--enable-python], [create python support]),
found_python=yes
)
AM_CONDITIONAL(IS_PYTHON, test "x$found_python" = xyes)
PYLIBS=""
PYINCS=""
if test "x$found_python" = xyes; then
AC_CHECK_PROG(CYTHON_CHECK,cython,yes)
if test x"$CYTHON_CHECK" != x"yes" ; then
AC_MSG_ERROR([Please install cython])
fi
AC_CHECK_PROG(PYTHON_CONF_CHECK,python-config,yes)
PYLIBS=`python-config --libs`
PYINCS=`python-config --includes`
if test "x$PYLIBS" = x; then
AC_MSG_ERROR("python-dev not found")
fi
fi
AC_SUBST(PYLIBS)
AC_SUBST(PYINCS)
This adds the --enable-python switch to your configure script. You now have the cython command detected, along with the PYLIBS and PYINCS variables holding the flags you need for compilation. Next, you need a snippet showing automake how to compile the *.pyx files in your sources:
bin_PROGRAMS = myprog
ACLOCAL_AMFLAGS = -I etc
CFLAGS += -I$(PYINCS)
LIBTOOL_DEPS = @LIBTOOL_DEPS@
libtool: $(LIBTOOL_DEPS)
$(SHELL) ./config.status libtool
SUFFIXES = .pyx
.pyx.c:
@echo " CPY " $<
@cython -2 -o $@ $<
myprog_SOURCES = \
src/bla.pyx \
...
myprog_LDADD = \
$(PYLIBS)
Once you're comfortable with where your code lives and which linking model you're using, choosing a build system is straightforward. At that point, embedding Python becomes very easy, almost second nature.
Summary
This whole article aimed to make you more comfortable with Cython and to show you that it is just like writing Python code. If you start using public and cdef regularly, you will see that you can mix C and Python code as if it were all the same language! Better yet, in each language, you get access to everything that language has. So, if you have Twisted installed in Python, you can access Twisted when you're in Python land; and if you're in C land, you can use fcntl or ioctl!
About the Author
Philip Herron
Philip Herron is an avid software engineer who focuses his passion on compilers and virtual machine implementations. When he was first accepted to Google Summer of Code 2010, he drew inspiration from Paul Biggar's PhD on optimization of dynamic languages to develop a proof-of-concept GCC frontend to compile Python. This project sparked his deep interest in how Python works.
After completing a consecutive year on the same project in 2011, Philip decided to apply for Cython under the Python foundation to gain a deeper appreciation of the standard Python implementation. Through this, he started leveraging the advantages of Python to control the logic in systems or even to add more high-level interfaces such as embedding Twisted web servers for REST calls to a system-level piece of software without writing any C code.
Currently, Philip is employed by NYSE Euronext in Belfast, Northern Ireland, working on multiprocessing systems. He spends his evenings hacking on GCCPy, Cython, and GCC. In the past, he has worked with WANdisco as an Apache Hadoop developer and as an intern with SAP Research on cloud computing.
NAME
Linux::LXC - Manage your LXC containers.
VERSION
1.0003
SYNOPSIS
use Linux::LXC qw(ALLOW_UNDEF);
my $c = Linux::LXC->new('utsname' => 'mycontainer');
This module helps you to manage LXC containers. Each container is represented by an object of this module. Some module subroutines are also usable without any object instance.
Module subroutines
get_existing_containers()
Will return an array with the name of all LXC containers existing on the system.
get_running_containers()
Will return an array with the name of all LXC containers currently running on the system.
get_stopped_containers()
Will return an array with the name of all LXC containers currently stopped on the system.
Object methods
new(%params)
Instantiate a new Linux::LXC object. Params that can be initialized:
- utsname
Mandatory parameter. Set the utsname of the container.
- template
Mandatory only if you planned to deploy the container. Set the LXC template to use for deploying the container.
- return
A Linux::LXC object.
deploy()
Will deploy the container. Concretely, this method checks that the container does not already exist, then executes the `lxc-create -n <utsname> -t <template>` shell command.
- return
The previous Linux::LXC object.
del_config($attribute, $filter)
Will delete all LXC configuration container attributes that respect the $filter pattern.
- $attribute
The attribute to delete.
- $filter
A regex or undef. It will be compared with all $attribute values. The ones that match will be removed. If undef, all values will be removed.
- return
The number of elements deleted.
destroy()
Will stop the container if it is running, and destroy it with the `lxc-destroy -n <utsname>` shell command.
- return
The previous Linux::LXC object.
exec($cmd)
Will execute the $cmd command in the container. This method uses IPC::Run, which means we don't have to think about precedence between shell operators. E.g.: exec('echo "Hello" >> ~/file.txt') will write the file on the container, and not on the machine that actually runs the command.
- return (if want array)
($result, $stdout, $stderr); $result is true if the shell command returned 0 (which usually means the command succeeded), false otherwise. $stdout and $stderr are self-explanatory.
- return (if want scalar)
True if the shell command returned 0 (which usually means the command succeeded), false otherwise.
get_config($parameter, $filter, $flag)
Get an array of all values of the given parameter in the LXC container configuration that match $filter.
- $parameter
The parameter to match.
- $filter
A regex, or undef. Values of the parameter to keep. If undef, we will keep all of them.
- $flag
ALLOW_EMPTY: don't croak if the requested parameter was not found.
- return
An array with all matched results.
get_lxc_path()
Return the path to the LXC instance of the container. By default, it's /var/lib/lxc/<utsname>/. This path is the folder that contains the rootfs and the config file.
get_template()
Get the template of the LXC instance.
get_utsname()
Will return the utsname of the container.
is_existing()
Return true if a container with the given utsname exists. False otherwise.
is_running()
Return true if the container with the given utsname is running. False otherwise.
is_stopped()
Return true if the container with the given utsname is stopped. False otherwise.
put($input, $destination)
Will copy the $input file or folder to the $destination path in the container instance. This method also takes care of ownership and will chown $destination to the container root uid. The ownership will also be set for all intermediate folders we have to create.
- $input
String corresponding to a relative or absolute path of a folder or a file we want to copy on the container root fs. This path should be readable by the user executing this script.
- $destination
Location in the container to put the file or folder. This path has to be absolute.
set_config($attribute, $value, $flag)
Will set an LXC attribute in the container configuration. The update can occur in two modes: addition or erasing. In the first, a new attribute with the given value will always be created. In the second, the first already-existing value of $attribute will be updated with the new $value; if none is found, the attribute will also be created.
- $attribute
Attribute to set.
- $value
Value to give to the attribute.
- $flag
Can be ADDITION_MODE, ERASING_MODE or undef. If undef, ERASING_MODE will occur.
set_template($template)
Will set the $template name to the given container. Note that this action should be done before the deployment.
start()
Start the container.
stop()
Stop the container.
AUTHOR
Spydemon <jsaipakoimetr@spyzone.fr>
BUGS AND INFO
A bug tracker is available for this module, but registrations are closed because of a spamming issue. If you want an account for contributing, or to report an enhancement suggestion or bug, please send me an email.
This software is copyright (c) 2018 by Spydemon.
1
Hello, RxSwift!
Written by Marin Todorov
This book aims to introduce you, the reader, to the RxSwift library and to writing reactive iOS apps with RxSwift.
But what exactly is RxSwift? Here’s a good definition:
RxSwift is a library for composing asynchronous and event-based code by using observable sequences and functional style operators, allowing for parameterized execution via schedulers.
Sounds complicated? Don’t worry if it does. Writing reactive programs, understanding the many concepts behind them, and navigating a lot of the relevant, commonly used lingo might be intimidating — especially if you try to take it all in at once, or when you haven’t been introduced to it in a structured way.
That’s the goal of this book: to gradually introduce you to the various RxSwift APIs and general Rx concepts by explaining how to use each of the APIs and build intuition about how reactive programming can serve you, all while covering RxSwift’s practical usage in iOS apps.
You’ll start with the basic features of RxSwift, and then gradually work through intermediate and advanced topics. Taking the time to exercise new concepts extensively as you progress will make it easier to master RxSwift by the end of the book. Rx is too broad of a topic to cover completely in a single book; instead, we aim to give you a solid understanding of the library so that you can continue developing Rx skills on your own.
We still haven’t quite established what RxSwift is though, have we? Let’s start with a simple, understandable definition and progress to a better, more expressive one as we waltz through the topic of reactive programming later in this chapter.
RxSwift, in its essence, simplifies developing asynchronous programs by allowing your code to react to new data and process it in a sequential, isolated manner.
As an iOS app developer, this should be much more clear and tell you more about what RxSwift is, compared to the first definition you read earlier in this chapter.
Even if you’re still fuzzy on the details, it should be clear that RxSwift helps you write asynchronous code. And you know that developing good, deterministic, asynchronous code is hard, so any help is quite welcome!
Introduction to asynchronous programming
If you tried to explain asynchronous programming in a simple, down to earth language, you might come up with something along the lines of the following.
An iOS app, at any moment, might be doing any of the following things and more:
- Reacting to button taps
- Animating the keyboard as a text field loses focus
- Downloading a large photo from the Internet
- Saving bits of data to disk
- Playing audio
All of these things seemingly happen at the same time. Whenever the keyboard animates out of the screen, the audio in your app doesn’t pause until the animation has finished, right?
All the different bits of your program don’t block each other’s execution. iOS offers you various kinds of APIs that allow you to perform different pieces of work on different threads, across different execution contexts, and perform them across the different cores of the device’s CPU.
Writing code that truly runs in parallel, however, is rather complex, especially when different bits of code need to work with the same pieces of data. It's hard to know for sure which piece of code updates the data first, or which code reads the latest value.
Cocoa and UIKit asynchronous APIs
Apple has always provided numerous APIs in the iOS SDK that help you write asynchronous code. In fact the best practices on how to write asynchronous code on the platform have evolved many times over the years.
You’ve probably used many of these in your projects and probably haven’t given them a second thought because they are so fundamental to writing mobile apps.
To mention a few, you have a choice of:
- NotificationCenter: To execute a piece of code any time an event of interest happens, such as the user changing the orientation of the device or the software keyboard showing or hiding on the screen.
- The delegate pattern: Lets you define an object that acts on behalf, or in coordination with, another object.
- Grand Central Dispatch: To help you abstract the execution of pieces of work. You can schedule blocks of code to be executed sequentially, concurrently, or after a given delay.
- Closures: To create detached pieces of code that you can pass around in your code, and finally
- Combine: Apple’s own framework for writing reactive asynchronous code with Swift, introduced at WWDC 2019 and available from iOS 13.
Depending on which APIs you chose to rely on, the degree of difficulty to maintain your app in a coherent state varies largely.
For example if you’re using some of the older Apple APIs like the delegate pattern or notification center you need to do a lot of hard work to keep your app’s state consistent at any given time.
If you have a shiny new codebase using Apple’s Combine, then (of course) you’re already well-versed in reactive programming, so congrats and kudos!
To wrap up this section and put the discussion into a bit more context, you’ll compare two pieces of code: one synchronous and one asynchronous.
Synchronous code
Performing an operation for each element of an array is something you’ve done plenty of times. It’s a very simple yet solid building block of app logic because it guarantees two things: It executes synchronously, and the collection is immutable while you iterate over it.
Take a moment to think about what this implies. When you iterate over a collection, you don’t need to check that all elements are still there, and you don’t need to rewind back in case another thread inserts an element at the start of the collection. You assume you always iterate over the collection in its entirety at the beginning of the loop.
If you want to play a bit more with these aspects of the for loop, try this in a playground:
var array = [1, 2, 3]
for number in array {
  print(number)
  array = [4, 5, 6]
}
print(array)
Is array mutable inside the for body? Does the collection that the loop iterates over ever change? What’s the sequence of execution of all commands? Can you modify number if you need to?
Asynchronous code
Consider similar code, but assume each iteration happens as a reaction to a tap on a button. As the user repeatedly taps on the button, the app prints out the next element in an array:
var array = [1, 2, 3]
var currentIndex = 0

// This method is connected in Interface Builder to a button
@IBAction private func printNext() {
  print(array[currentIndex])

  if currentIndex != array.count - 1 {
    currentIndex += 1
  }
}
Think about this code in the same context as you did for the previous one. As the user taps the button, will that print all of the array’s elements? You really can’t say. Another piece of asynchronous code might remove the last element, before it’s been printed.
Or another piece of code might insert a new element at the start of the collection after you’ve moved on.
Also, you assume currentIndex is only mutated by printNext(), but another piece of code might modify currentIndex as well, perhaps some clever code you added at some point after crafting the above method.
You’ve likely realized that some of the core issues with writing asynchronous code are: a) the order in which pieces of work are performed and b) shared mutable data.
Luckily, these are some of RxSwift’s strong suits!
Next, you need a good primer on the language that will help you start understanding how RxSwift works and what problems it solves; this will ultimately let you move past this gentle introduction and into writing your first Rx code in the next chapter.
Asynchronous programming glossary
Some of the language in RxSwift is so tightly bound to asynchronous, reactive, and/or functional programming that it will be easier if you first understand the following foundational terms.
In general, RxSwift tries to address the following issues:
1. State, and specifically, shared mutable state
State is somewhat difficult to define. To understand state, consider the following practical example.
When you start your laptop it runs just fine, but, after you use it for a few days or even weeks, it might start behaving weirdly or abruptly hang and refuse to speak to you. The hardware and software remains the same, but what’s changed is the state. As soon as you restart, the same combination of hardware and software will work just fine once more.
The data in memory, the one stored on disk, all the artifacts of reacting to user input, all traces that remain after fetching data from cloud services — the sum of these is the state of your laptop.
Managing the state of your app, especially when shared between multiple asynchronous components, is one of the issues you’ll learn how to handle in this book.
2. Imperative programming
Imperative programming is a programming paradigm that uses statements to change the program’s state. Much like you would use imperative language while playing with your dog — “Fetch! Lay down! Play dead!” — you use imperative code to tell the app exactly when and how to do things.
Imperative code is similar to the code that your computer understands. All the CPU does is follow lengthy sequences of simple instructions. The issue is that it gets challenging for humans to write imperative code for complex, asynchronous apps — especially when shared mutable state is involved.
For example, take this code, found in viewDidAppear(_:) of an iOS view controller:
override func viewDidAppear(_ animated: Bool) {
  super.viewDidAppear(animated)
  setupUI()
  connectUIControls()
  createDataSource()
  listenForChanges()
}
There’s no telling what these methods do. Do they update properties of the view controller itself? More disturbingly, are they called in the right order? Maybe somebody inadvertently swapped the order of these method calls and committed the change to source control. Now the app might behave differently due to the swapped calls.
3. Side effects
Now that you know more about mutable state and imperative programming, you can pin down most issues with those two things to side effects.
Side effects represent any changes to the state outside of your code’s current scope. For example, consider the last piece of code in the example above.
connectUIControls() probably attaches some kind of event handler to some UI components. This causes a side effect, as it changes the state of the view: The app behaves one way before executing connectUIControls() and differently after that.
Any time you modify data stored on disk or update the text of a label on screen, you cause a side effect.
Side effects are not bad in themselves. After all, causing side effects is the ultimate goal of any program! You need to change the state of the world somehow after your program has finished executing.
Running for a while and doing nothing makes for a pretty useless app.
The important aspect of producing side effects is doing so in a controlled way. You need to be able to determine which pieces of code cause side effects, and which simply process and output data.
RxSwift tries to address the issues (or problems) listed above by tackling the following couple of concepts.
4. Declarative code
In imperative programming, you change state at will. In functional programming, you aim to minimize the code that causes side effects. Since you don’t live in a perfect world, the balance lies somewhere in the middle. RxSwift combines some of the best aspects of imperative code and functional code.
Declarative code lets you define pieces of behavior. RxSwift will run these behaviors any time there’s a relevant event and provide an immutable, isolated piece of data to work with.
This way, you can work with asynchronous code, but make the same assumptions as in a simple for loop: that you’re working with immutable data and can execute code in a sequential, deterministic way.
5. Reactive systems
Reactive systems is a rather abstract term and covers web or iOS apps that exhibit most or all of the following qualities:
- Responsive: Always keep the UI up to date, representing the latest app state.
- Resilient: Each behavior is defined in isolation and provides for flexible error recovery.
- Elastic: The code handles varied workload, often implementing features such as lazy pull-driven data collections, event throttling, and resource sharing.
- Message-driven: Components use message-based communication for improved reusability and isolation, decoupling the lifecycle and implementation of classes.
Now that you have a good understanding of the problems RxSwift helps solve and how it approaches these issues, it’s time to talk about the building blocks of Rx and how they play together.
Foundation of RxSwift
Reactive programming isn’t a new concept; it’s been around for a fairly long time, but its core concepts have made a noticeable comeback over the last decade.
In that period, web apps have become more involved and are facing the issue of managing complex asynchronous UIs. On the server side, reactive systems (as described above) have become a necessity.
A team at Microsoft took on the challenge of solving the problems of asynchronous, scalable, real-time app development that we’ve discussed in this chapter. Sometime around 2009 they offered a new client and server side framework called Reactive Extensions for .NET (Rx).
Rx for .NET has been open source since 2012 permitting other languages and platforms to reimplement the same functionality, which turned Rx into a cross-platform standard.
Today, you have RxJS, RxKotlin, Rx.NET, RxScala, RxSwift and more. All strive to implement the same behavior and same expressive APIs, based on the Reactive Extensions specification. Ultimately, a developer creating an iOS app with RxSwift can freely discuss app logic with another programmer using RxJS on the web.
Note: You can read more about the family of Rx implementations online.
Like the original Rx, RxSwift also works with all the concepts you’ve covered so far: It tackles mutable state, it allows you to compose event sequences and improves on architectural concepts such as code isolation, reusability and decoupling.
In this book, you are going to cover both the cornerstone concepts of developing with RxSwift as well as real-world examples of how to use them in your apps.
The three building blocks of Rx code are observables, operators and schedulers. The sections below cover each of these in detail.
Observables
Observable<Element> provides the foundation of Rx code: the ability to asynchronously produce a sequence of events that can “carry” an immutable snapshot of generic data of type Element. In the simplest words, it allows consumers to subscribe for events, or values, emitted by another object over time.
The Observable class allows one or more observers to react to any events in real time and update the app’s UI, or otherwise process and utilize new and incoming data.
The ObservableType protocol (to which Observable conforms) is extremely simple. An Observable can emit (and observers can receive) only three types of events:
- A next event: An event that “carries” the latest (or “next”) data value. This is the way observers “receive” values. An Observable may emit an indefinite amount of these values, until a terminating event is emitted.
- A completed event: This event terminates the event sequence with success. It means the Observable completed its lifecycle successfully and won’t emit additional events.
- An error event: The Observable terminates with an error and will not emit additional events.
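This contract is small enough to sketch in a few lines of any language. Here is a toy Python version (not the RxSwift API, just an illustration of the rules): any number of next values, followed by at most one terminating completed or error event:

```python
def make_observable(values, error=None):
    """Return a subscribe function that emits each value as a 'next' event,
    then exactly one terminating event: 'error' if given, else 'completed'."""
    def subscribe(on_next, on_error=None, on_completed=None):
        for value in values:
            on_next(value)          # zero or more 'next' events
        if error is not None:
            if on_error:
                on_error(error)     # terminate with error...
        elif on_completed:
            on_completed()          # ...or terminate with success
    return subscribe

received = []
subscribe = make_observable([1, 2, 3])
subscribe(received.append, on_completed=lambda: received.append("completed"))
```

After subscribing, received holds the three values followed by the single terminating event, which is exactly the shape the Observable contract guarantees.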
When talking about asynchronous events emitted over time, you can visualize an observable stream of integers on a timeline, like so:
This simple contract of three possible events an Observable can emit is anything and everything in Rx. Because it is so universal, you can use it to create even the most complex app logic.
Since the observable contract does not make any assumptions about the nature of the Observable or the observer, using event sequences is the ultimate decoupling practice.
You don’t ever need to use delegate protocols or to inject closures to allow your classes to talk to each other.
To get an idea about some real-life situations, you’ll look at two different kinds of observable sequences: finite and infinite.
Finite observable sequences
Some observable sequences emit zero, one or more values, and, at a later point, either terminate successfully or terminate with an error.
In an iOS app, consider code that downloads a file from the internet:
- First, you start the download and start observing for incoming data.
- You then repeatedly receive chunks of data as parts of the file arrive.
- In the event the network connection goes down, the download will stop and the connection will time out with an error.
- Alternatively, if the code downloads all the file’s data, it will complete with success.
This workflow accurately describes the lifecycle of a typical observable. Take a look at the related code below:
API.download(file: "...")
  .subscribe(
    onNext: { data in
      // Append data to temporary file
    },
    onError: { error in
      // Display error to user
    },
    onCompleted: {
      // Use downloaded file
    }
  )
API.download(file:) returns an Observable<Data> instance, which emits Data values as chunks of data fetched over the network.
You subscribe for next events by providing the onNext closure. In the downloading example, you append the data to a temporary file stored on disk.
You subscribe for an error by providing the onError closure. In this closure, you can display the error.localizedDescription in an alert box or otherwise handle your error.
Finally, to handle a completed event, you provide the onCompleted closure, where you can push a new view controller to display the downloaded file or anything else your app logic dictates.
Infinite observable sequences
Unlike file downloads or similar activities, which are supposed to terminate either naturally or forcefully, there are other sequences which are simply infinite. Often, UI events are such infinite observable sequences.
For example, consider the code you need to react to device orientation changes in your app:
- You add your class as an observer to UIDeviceOrientationDidChange notifications from NotificationCenter.
- You then need to provide a method callback to handle orientation changes. It needs to grab the current orientation from UIDevice and react accordingly to the latest value.
This sequence of orientation changes does not have a natural end. As long as there is a device, there is a possible sequence of orientation changes. Further, since the sequence is virtually infinite and stateful, you always have an initial value at the time you start observing it.
It may happen that the user never rotates their device, but that doesn’t mean the sequence of events is terminated. It just means there were no events emitted.
In RxSwift, you could write code like this to handle device orientation:
UIDevice.rx.orientation
  .subscribe(onNext: { current in
    switch current {
    case .landscape:
      // Re-arrange UI for landscape
    case .portrait:
      // Re-arrange UI for portrait
    }
  })
UIDevice.rx.orientation is a fictional control property that produces an Observable<Orientation> (this is very easy to code yourself; you'll learn how in the next chapters). You subscribe to it and update your app UI according to the current orientation. You skip the onError and onCompleted arguments, since these events can never be emitted from that observable.
Operators
ObservableType and the implementation of the Observable class include plenty of methods that abstract discrete pieces of asynchronous work and event manipulations, which can be composed together to implement more complex logic. Because they are highly decoupled and composable, these methods are most often referred to as operators.
Since these operators mostly take in asynchronous input and only produce output without causing side effects, they can easily fit together, much like puzzle pieces, and work to build a bigger picture.
For example, take the mathematical expression: (5 + 6) * 10 - 2. In a clear, deterministic way, you can apply the operators *, ( ), + and - in their predefined order to the pieces of data that are their input, take their output and keep processing the expression until it's resolved.
In a somewhat similar manner, you can apply Rx operators to the events emitted by an Observable to deterministically process inputs and outputs until the expression has been resolved to a final value, which you can then use to cause side effects.
Here’s the previous example about observing orientation changes, adjusted to use some common Rx operators:
UIDevice.rx.orientation
  .filter { $0 != .landscape }
  .map { _ in "Portrait is the best!" }
  .subscribe(onNext: { string in
    showAlert(text: string)
  })
Each time UIDevice.rx.orientation produces either a .landscape or .portrait value, RxSwift will apply filter and map to that emitted piece of data.

First, filter will only let through values that are not .landscape. If the device is in landscape mode, the subscription code will not get executed, because filter will suppress these events.

In the case of .portrait values, the map operator will take the Orientation input and convert it to a String output: the text "Portrait is the best!".

Finally, with subscribe, you subscribe for the resulting next event, this time carrying a String value, and you call a method to display an alert with that text onscreen.
The operators are also highly composable: they always take in data as input and output their result, so you can easily chain them in many different ways, achieving much more than what a single operator can do on its own!
As you work through the book, you will learn about more complex operators that abstract more involved pieces of asynchronous work.
Schedulers
Schedulers are the Rx equivalent of dispatch queues or operation queues — just on steroids and much easier to use. They let you define the execution context of a specific piece of work.
RxSwift comes with a number of predefined schedulers that cover 99% of use cases, which hopefully means you will never have to create your own scheduler.
In fact, most of the examples in the first half of this book are quite simple and generally deal with observing data and updating the UI, so you won’t look into schedulers at all until you’ve covered the basics.
That being said, schedulers are very powerful.
For example, you can specify that you'd like to observe next events on a SerialDispatchQueueScheduler, which uses Grand Central Dispatch to run your code serially on a given queue. ConcurrentDispatchQueueScheduler will run your code concurrently, while OperationQueueScheduler will allow you to schedule your subscriptions on a given OperationQueue.
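As a sketch of how this looks in practice (the work itself is illustrative; the operator names subscribeOn/observeOn are the RxSwift 5 spellings, renamed to subscribe(on:) and observe(on:) in RxSwift 6):

```swift
import RxSwift

let background = ConcurrentDispatchQueueScheduler(qos: .background)

_ = Observable.of(1, 2, 3)
  .subscribeOn(background)            // produce values on a background GCD queue
  .map { $0 * 10 }                    // still running on the background scheduler
  .observeOn(MainScheduler.instance)  // hop to the main thread
  .subscribe(onNext: { value in
    // safe to touch the UI from here
    print(value)
  })
```

subscribeOn controls where the sequence starts producing events, while observeOn changes the scheduler for everything downstream of it.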
Thanks to RxSwift, you can schedule different pieces of work of the same subscription on different schedulers to achieve the best performance for your use case.
RxSwift will act as a dispatcher between your subscriptions (on the left-hand side below) and the schedulers (on the right-hand side), sending the pieces of work to the correct context and seamlessly allowing them to work with each other’s output.
To read this diagram, follow the colored pieces of work in the sequence they were scheduled (1, 2, 3, ...) across the different schedulers. For example:
- The blue network subscription starts with a piece of code (1) that runs on a custom OperationQueue-based scheduler.
- The data output by this block serves as the input of the next block (2), which runs on a concurrent background GCD queue.
- Finally, the last piece of blue code (3) is scheduled on the Main thread scheduler in order to update the UI with the new data.
Even if it looks very interesting and quite handy, don’t bother too much with schedulers right now. You’ll return to them later in this book.
App architecture
It’s worth mentioning that RxSwift doesn’t alter your app’s architecture in any way; it mostly deals with events, asynchronous data sequences and a universal communication contract.
It’s also important to note that you definitely do not have to start a project from scratch to make it a reactive app; you can iteratively refactor pieces of an existing project or simply use RxSwift when building new features for your app.
You can create apps with Rx by implementing the Model-View-Controller architecture, Model-View-Presenter, Model-View-ViewModel (MVVM), or any other pattern that makes your life easier.
RxSwift and MVVM specifically do play nicely together. The reason is that a ViewModel allows you to expose Observable properties, which you can bind directly to UIKit controls in your View controller's glue code. This makes binding model data to the UI very simple to represent and to code.
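A sketch of what that glue code can look like (the view model, its property name, and the label are all illustrative; bind(to:) comes from RxCocoa, and BehaviorRelay from RxRelay):

```swift
import UIKit
import RxSwift
import RxCocoa
import RxRelay

final class ProfileViewModel {
  // The view model exposes its state as an observable sequence
  let username = BehaviorRelay<String>(value: "Marin")
}

final class ProfileViewController: UIViewController {
  let nameLabel = UILabel()
  let viewModel = ProfileViewModel()
  private let bag = DisposeBag()

  override func viewDidLoad() {
    super.viewDidLoad()
    // One line of glue code binds the model property to the UI control
    viewModel.username
      .bind(to: nameLabel.rx.text)
      .disposed(by: bag)
  }
}
```

Whenever the relay's value changes, the label updates automatically; no manual "refresh the UI" calls are needed.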
Towards the end of this book, you’ll look into that pattern and how to implement it with RxSwift. All other examples in the book use the MVC architecture in order to keep the sample code simple and easy to understand.
RxCocoa
RxSwift is the implementation of the common, platform-agnostic Rx specification. Therefore, it doesn't know anything about any Cocoa or UIKit-specific classes. RxCocoa is its companion library, which adds reactive extensions to Cocoa and UIKit classes.
RxCocoa adds the rx.isOn property (among others) to the UISwitch class so you can subscribe to useful events as reactive Observable sequences.
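For example, subscribing to the state changes of a UISwitch could look like this (a sketch; toggleSwitch is an assumed control instance):

```swift
import UIKit
import RxSwift
import RxCocoa

let toggleSwitch = UISwitch()

// rx.isOn exposes the switch state as an Observable<Bool>
_ = toggleSwitch.rx.isOn
  .subscribe(onNext: { isOn in
    print(isOn ? "It's ON" : "It's OFF")
  })
```

The onNext closure fires every time the user flips the switch, with no target-action boilerplate.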
Further, RxCocoa adds the rx namespace to UITextField, URLSession, UIViewController and many more, and even lets you define your own reactive extensions under this namespace, which you'll learn more about later in this book.
Installing RxSwift
RxSwift is open source and available for free on GitHub.
RxSwift is distributed under the MIT license, which in short allows you to include the library in free or commercial software, on an as-is basis. As with all other MIT-licensed software, the copyright notice should be included in all apps you distribute.
There is plenty to explore in the RxSwift repository. It includes the RxSwift, RxCocoa, and RxRelay libraries, but you will also find RxTest and RxBlocking in there, which allow you to write tests for your RxSwift code.
Besides all the great source code (definitely worth peeking into), you will find Rx.playground, which interactively demonstrates many of the operators. Also check out RxExample, which is a great showcase app that demonstrates many of the concepts in practice.
You can install RxSwift and RxCocoa in a few different ways: via Xcode's built-in dependency management, via CocoaPods, or via Carthage.
RxSwift via CocoaPods
You can install RxSwift via CocoaPods like any other CocoaPod. A typical Podfile would look something like this:
use_frameworks!

target 'MyTargetName' do
  pod 'RxSwift', '~> 5.1'
  pod 'RxCocoa', '~> 5.1'
end
Of course, you can include just RxSwift, both RxSwift and RxCocoa, or even all the libraries found in the GitHub repository.
RxSwift via Carthage
Installing RxSwift via Carthage is almost equally streamlined. First, make sure you've installed the latest version of Carthage.
In your project, create a new file named Cartfile and add the following line to it:
github "ReactiveX/RxSwift" ~> 5.1
Next, within the folder of your project, execute carthage update.
This will download the source code of all libraries included in the RxSwift repository and build them, which might take some time. Once the process finishes, find the resulting framework files in the Carthage subfolder created inside the current folder and link them in your project.
Build once more to make sure Xcode indexes the newly added frameworks, and you’re ready to go.
Installing RxSwift in the book projects
The projects in this book all come with a completed Podfile to use with CocoaPods, but without RxSwift itself installed, to keep the download size of the book projects light.
Before you start working on the book, make sure you have the latest version of CocoaPods installed. You need to do that just once before starting to work on the book’s projects. Usually executing this in Terminal will suffice:
sudo gem install cocoapods
If you want to know more, visit the CocoaPods website.
At the start of each chapter, you will be asked to open the starter project for that chapter and install RxSwift in the starter project. This is an easy operation:
- In the book folder, find the directory matching the name of the chapter you are working on.
- Copy the starter folder to a convenient location on your computer. A location in your user folder is a good idea.
- Open the built-in Terminal.app, or another terminal you use on a daily basis, and navigate to the starter folder: type cd /users/yourname/path/to/starter, replacing the example path with the actual path on your computer.
- In chapters where you'll be using a playground, simply run ./bootstrap.sh, which will fetch RxSwift from GitHub, pre-build the framework, and then automatically open Xcode for you so you can start writing some code.
- In chapters where you'll be using a standard Xcode project, type pod install to fetch RxSwift from GitHub and install it in the chapter project. Then find the newly created .xcworkspace file and launch it. Build the workspace once in Xcode.
You’re now ready to work through the chapter!
Note: While all playgrounds were tested under Xcode 11, Xcode 12 suffers from a myriad of issues related to playground support with third-party dependencies such as RxSwift. If one of the provided playgrounds in this book doesn't work for you, we suggest copying and pasting the code from the playgrounds into a regular project with RxSwift embedded in it, or working with Xcode 11 for these specific chapters.
RxSwift and Combine
In this introductory chapter you got a taste of what RxSwift is all about. We spoke about some of the benefits of writing reactive code with RxSwift over using more traditional APIs like notification center and delegates.
Before wrapping up, it's definitely worth expanding a bit on what we mentioned earlier: Apple's own reactive framework, Combine.
RxSwift and Combine (as well as other reactive programming frameworks in Swift) share a lot of common language and very similar concepts.
RxSwift is an older, well-established framework with some of its own original concepts, operator names and type variety, mainly due to its multi-platform, cross-language standard. It also works on Linux, which is great for server-side Swift. It's also open source, so you can, if you wish, contribute directly to its core and see exactly how specific portions of it work. It's compatible with all Apple platform versions that support Swift, all the way back to iOS 8.
Combine is Apple's new and shiny framework that covers similar concepts, but is tailored specifically to Swift and Apple's own platforms. It shares a lot of common language with the Swift standard library, so the APIs feel very familiar even to newcomers. It supports only the newer Apple platforms, starting at iOS 13 and macOS 10.15. It is unfortunately not open source as of today, and it does not support Linux.
Luckily, since RxSwift and Combine resemble each other so closely, your RxSwift knowledge is easily transferable to Combine, and vice versa. And projects such as RxCombine allow you to mix and match RxSwift Observables and Combine Publishers based on your needs.
If you'd like to learn more about Combine, we've created the definitive book on that framework too: "Combine: Asynchronous Programming with Swift".
Community
The RxSwift project is alive and buzzing with activity, not only because Rx is inspiring programmers to create cool software with it, but also due to the positive nature of the community that formed around this project.
The RxSwift community is very friendly, open minded and enthusiastic about discussing patterns, common techniques or just helping each other.
Besides the official RxSwift repository, you'll find plenty of projects created by Rx enthusiasts.
Even more Rx libraries and experiments, which spring up like mushrooms after the rain, can be found as well.
Probably the best way to meet many of the people interested in RxSwift is the Slack channel dedicated to the library.
The Slack channel has almost 8,000 members! Day-to-day topics include helping each other, discussing potential new features of RxSwift or its companion libraries, and sharing RxSwift blog posts and conference talks.
Where to go from here?
This chapter introduced you to many of the problems that RxSwift addresses. You learned about the complexities of asynchronous programming, sharing mutable state, causing side effects and more.
You haven’t written any RxSwift code yet, but you now understand why RxSwift is a good idea and you’re aware of the types of problems it solves. This should give you a good start as you work through the rest of the book.
And there is plenty to work through. You’ll start by creating very simple observables and work your way up to complete real-world apps using MVVM architecture.
Move right on to Chapter 2, “Observables”!
This is it! Thank you.

Quoting "Andrew Bennetts" <andrew at bemusement.org>:

> vitaly at synapticvision.com wrote:
> [...]
>> def abc1(self):
>>     if t.test() is None:
>>         raise Exception("Error11")
>>     else:
>>         d = defer.Deferred()
>>         d.callback(1)
>>         return d
>>
>> and basically, I've expected in case of exception
>> self.handleFailure1() to be called, but I don't see it happen.
>
> This is a function that either returns a Deferred or raises an exception.
> This isn't a Twisted issue, it's simply a Python one: in
> "func1().func2()", if func1 raises an exception then func2 will not be
> invoked. That's fundamental to what raising an exception in Python means.
>
> If you want to convert this to something that always returns a Deferred,
> you can either rewrite it, e.g. using
> "return defer.fail(Exception('Error11'))", or use maybeDeferred, which
> will intercept exceptions for you, e.g.:
>
>     return (
>         maybeDeferred(self.abc1).
>         addErrback(self.handleFailure1).
>         # etc...
>     )
>
> You can find maybeDeferred in the twisted.internet.defer module.
>
> -Andrew.
For some reason the Google spreadsheets are like molasses in February on my box -- it's like a 2GHz P4, but my Linux distro is old enough that I'm running ffox2.16 -- I think there are other CPU-hogs on the box -- RAM-hogs too.) I told him that it looked pretty slick though....
He said yeah it is, but without latitude/longitude Google won't map locations from the spreadsheet. That seemed odd to me, but it also seemed to me that if you look for an address like "950 santa cruz, menlo park, ca 94025", the lat/lng are returned in the HTML that maps.google.com sends back.
Oooh, I thought, "lynx -dump -source" might be my friend here. Then use PERL or Python to extract the numbers... Then I had a better thought: use one language, one program, rather than lynx and shell and sed/perl/python.
So I hacked together a little Python script that looks basically like this:
#!/usr/bin/python -tt
# vim:sw=4:ts=8:et
import httplib
import re
import string
import sys
DEBUG=0
# DEBUG=1
def main(argv):
"given an address in argv, give human-readable lat/long"
the_addr = re.sub('\s+', '+', string.join(argv))
map_html = ' ' + addr2page(the_addr)
print "latitude:", interp_coords(map_html, 'lat', 'south', 'north')
print "longitude:", interp_coords(map_html, 'lng', 'west', 'east')
sys.exit(0)
def addr2page(the_addr):
"""given an address string, return a long html string from google maps.
address string should contain no whitespace."""
map_site = 'maps.google.com'
map_query = '/maps?q=' + the_addr
if DEBUG:
print "DEBUG: if this were for real, we'd go to"
print "DEBUG: http://" + map_site + map_query
print "DEBUG: but it's not, so let's not and say we did"
return '<html> lng:-122.3456 lat:33.4567'
conn = httplib.HTTPConnection(map_site)
conn.request('GET', map_query)
r1 = conn.getresponse()
if r1.status != 200:
# Trouble in paradise
print >> sys.stderr, r1.status, r1.reason
the_page = r1.read() # useless
conn.close()
sys.exit(1)
# Got 'OK' so continue
the_page = r1.read()
conn.close()
return the_page
def interp_coords(html_string, LL, if_neg, if_pos):
"""return a substring of the form '(west) -123.456' from html_string
given a prefix 'lat:' or 'lng:' (supplied in 'LL')
if_neg => what to put in parens if the string is (duh) negative
if_pos => what you would think"""
coord = re.search('\W' + LL + ':([-+]?[.0-9]+)', html_string)
if coord is not None:
coord = coord.group(1)
if coord.startswith('-'):
suf = if_neg # i.e., it was negative
else:
suf = if_pos
return '(' + suf + ') ' + coord
return '??'
if __name__ == '__main__':
main(sys.argv[1:])
Python made it easy to throw that together. So this works great from the command line:

$ ./a2l.py 950 santa cruz, menlo park, ca 94025
latitude: (north) 37.449289999999998
longitude: (west) -122.187619
$

Of course, Chris isn't a command-line kind of guy. So I ended up making this into a CGI script. For doing this, I often turn to a site that explains the basics -- I just googled on "how CGI works" (no quotes) and found this site, which was very helpful. I used the Python library "cgi" to handle parameters. Worked like a champ.
Then Chris told me about a list of addresses separated by tabs. Python made this a piece of cake. First, I took "tab-separated list of addresses" literally, and did this:
addrs = form.getvalue(form_q)
if isinstance(addrs, list):
    addr_list = addrs
else:
    # maybe a TAB-separated string
    addr_list = addrs.split('\t')
for an_addr in addr_list:
    an_addr = an_addr.strip()
    if len(an_addr) > 0:
        do_one_addr(an_addr)

This actually includes the other thing: if you're typing in an HTML form and hit the TAB key, what usually happens? What happens to me is I end up going to the next field in the form. So I decided to just make an alternative page, which had maybe a couple dozen single-line input boxes with the same name (i.e., the very creative "q", for "query"). So I gave Chris a choice of a big text box (as "<textarea name="q" rows=16 cols=255> </textarea>") or a pile of single-line fields, as

<br/><INPUT TYPE=text NAME=q SIZE=128 MAXLENGTH=255>
<br/><INPUT TYPE=text NAME=q SIZE=128 MAXLENGTH=255>
<br/><INPUT TYPE=text NAME=q SIZE=128 MAXLENGTH=255>
[[etc]]

So he can use the textarea version in case he has a tab-separated list of addresses in a Mi¢ro$oft Word® document; if he's typing and using the TAB key at the end of addresses, he can use the version that has multiple single-line text inputs.
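Incidentally, the several-values-for-one-name behavior is easy to see with the standard library's query-string parser. A small sketch (the addresses are made up, and modern Python 3 is assumed rather than the Python 2 of the script above):

```python
from urllib.parse import parse_qs

# Three inputs all named "q", one left empty, as such a form would submit them
qs = "q=950+santa+cruz%2C+menlo+park&q=1600+amphitheatre+pkwy&q="

params = parse_qs(qs)  # empty values are dropped by default
for an_addr in params.get("q", []):
    print(an_addr)
# prints:
# 950 santa cruz, menlo park
# 1600 amphitheatre pkwy
```

parse_qs decodes "+" to spaces and percent-escapes like %2C to commas, and always returns a list per field name, so the CGI script's isinstance(addrs, list) check maps directly onto it.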
I don't think I'd ever done that before. It sure was fun! I'm not going to tell you where everything is -- I don't want Google Maps to get a pile of traffic from my site and then blacklist me.
But Google + Python + that "How CGI works" site all made it easy to learn stuff and be productive in short order, even while on vacation. And hey, even Econo-Lodge has free wi-fi! (If you want it at Motel6, you need to pay $2.99/night, and it might be a little slow.)
The acpi_ibm driver provides support for hotkeys and other components of IBM laptops. The main purpose of this driver is to provide an interface, accessible via sysctl(8) and devd(8), through which applications can determine the status of various laptop components.

While the sysctl(8) interface is enabled automatically after loading the driver, the devd(8) interface has to be enabled explicitly, as it may alter the default action of certain keys. This is done by setting the events sysctl as described below. Specifying which keys should generate events is done by setting a bitmask, in which each bit represents one key or key combination. This bitmask, accessible via the eventmask sysctl, is set to availmask by default, a value representing all possible keypress events on the specific ThinkPad model.

devd(8) Events

Hotkey events received by devd(8) provide the following information:

system     "ACPI"
subsystem  "IBM"
type       The source of the event in the ACPI namespace. The value depends on the model.
notify     Event code (see below).

Depending on the ThinkPad model, event codes may vary. On a ThinkPad T41p these are as follows:

0x01  Fn + F1
0x02  Fn + F2
0x03  Fn + F3 (LCD backlight)
0x04  Fn + F4 (Suspend to RAM)
0x05  Fn + F5 (Bluetooth)
0x06  Fn + F6
0x07  Fn + F7 (Screen expand)
0x08  Fn + F8
0x09  Fn + F9
0x0a  Fn + F10
0x0b  Fn + F11
0x0c  Fn + F12 (Suspend to disk)
0x0d  Fn + Backspace
0x0e  Fn + Insert
0x0f  Fn + Delete
0x10  Fn + Home (Brightness up)
0x11  Fn + End (Brightness down)
0x12  Fn + PageUp (ThinkLight)
0x13  Fn + PageDown
0x14  Fn + Space (Zoom)
0x15  Volume Up
0x16  Volume Down
0x17  Mute
0x18  Access IBM Button

Without the led(4) driver being loaded, only the Fn + F4 button generates an ACPI event.

dev.acpi_ibm.0.eventmask
    Sets the ACPI events which are reported to devd(8). Fn+F3, Fn+F4 and Fn+F12 always generate ACPI events, regardless of which value eventmask has. Depending on the ThinkPad model, the meaning of different bits in the eventmask may vary.
On a ThinkPad T41p this is a bitwise OR of the following:

1        Fn + F1
2        Fn + F2
4        Fn + F3 (LCD backlight)
8        Fn + F4 (Suspend to RAM)
16       Fn + F5 (Bluetooth)
32       Fn + F6
64       Fn + F7 (Screen expand)
128      Fn + F8
256      Fn + F9
512      Fn + F10
1024     Fn + F11
2048     Fn + F12 (Suspend to disk)
4096     Fn + Backspace
8192     Fn + Insert
16384    Fn + Delete
32768    Fn + Home (Brightness up)
65536    Fn + End (Brightness down)
131072   Fn + PageUp (ThinkLight)
262144   Fn + PageDown
524288   Fn + Space (Zoom)
1048576  Volume Up
2097152  Volume Down
4194304  Mute
8388608  Access IBM Button

dev.acpi_ibm.0.hotkey (read-only)
    Status of several buttons. Every time a button is pressed, the respective bit is toggled. It is a bitwise OR of the following:

    1     Home Button
    2     Search Button
    4     Mail Button
    8     Access IBM Button
    16    Zoom
    32    Wireless LAN Button
    64    Video Button
    128   Hibernate Button
    256   ThinkLight Button
    512   Screen Expand
    1024  Brightness Up/Down Button
    2048  Volume Up/Down/Mute Button

dev.acpi_ibm.0.lcd_brightness
    Current brightness level of the display.

dev.acpi_ibm.0.volume
    Speaker volume.

dev.acpi_ibm.0.mute
    Indicates whether the speakers are muted or not.

dev.acpi_ibm.0.thinklight
    Indicates whether the ThinkLight keyboard light is activated or not.

dev.acpi_ibm.0.bluetooth
    Toggle Bluetooth chip activity.

dev.acpi_ibm.0.wlan (read-only)
    Indicates whether the WLAN chip is active or not.

fan_level is not set accordingly.

dev.acpi_ibm.0.fan_level
    Indicates at what speed the fan should run when in manual mode. Values range from 0 (off) to 7 (max). The resulting speed differs from model to model. On a T41p this is as follows:

    0        off
    1, 2     ~3000 RPM
    3, 4, 5  ~3600 RPM
    6, 7     ~4300 RPM

dev.acpi_ibm.0.fan_speed (read-only)
    Fan speed in rounds per minute. A few older ThinkPads report the fan speed in levels ranging from 0 (off) to 7 (max).

dev.acpi_ibm.0.thermal (read-only)
    Shows the readings of up to eight different temperature sensors.
Most ThinkPads include six or more temperature sensors but only expose the CPU temperature through acpi_thermal(4).

The following can be added to /etc/devd.conf in order to pass button events to a /usr/local/sbin/acpi_oem_exec.sh script:

notify 10 {
    match "system"     "ACPI";
    match "subsystem"  "IBM";
    action "/usr/local/sbin/acpi_oem_exec.sh $notify ibm";
};
#include <Pt/Unit/Application.h>
Run registered tests. More...
The application class serves as an environment for a number of tests to be run. An application object is usually created in the main function of a program, and the return value of Unit::Application::run is returned. A reporter can be set for the application to process test events; reporters can be made to print information to the console or write XML logs.
The TestMain.h include already defines a main loop with an application for the common use case.
Returns a pointer to the found test or 0 if not found.
Adds the reporter r to report test events.
Adds the reporter r to report test events of the test name testname.
This method will run a previously registered test. Use the RegisterTest<T> template to register a test to the application.
Registers the test test with the application. The application will not own the test, and the caller has to make sure it exists as long as the application object. Tests can be deregistered by calling deregisterTest.
Introduction:
In this article I will explain how to access or get master page controls from child or content page in asp.net
Description:
In previous posts I explained many articles relating to asp.net, gridview, SQL Server, JQuery, JavaScript and etc. Now I will explain how to access master page controls from child page or content page in asp.net.
To get master page control values in the content page, first write the following code in the master page:
MasterPage.Master
After that write the code in Content Page Default.aspx will be like this
After completion of Default.aspx page add following namespaces in codebehind
C# Code
Now add following code in code behind
VB.NET Code
Demo
13 comments :
Hi,
I have a textbox,dropdownlist and button in master page.I want to get the selected item value of dropdownlist and text of textbox in a content page.How to do this? I was unable to do this in the manner suggested by you.
I tried of using public properties also,but didnot find solution.Can you help me? Thanks in advance.
My code in content page is
TextBox tt = (TextBox)Master.FindControl("TextBox1");
DropDownList ddl = (DropDownList)Master.FindControl("DropDownList1");
int x = Convert.ToInt16(ddl.SelectedItem.Value);
string text = tt.Text;
Is this method called "typecasting"?
Sir,
I need to fetch the asp.net menu control from master page to client page to change the menuitem text dynamically.
Please do the needful as soon possible.
Narendran N
Hi Guys,
I found the result after a little tryout with errors.
Menu yourMenu = (Menu)Master.FindControl("menubar1");
MenuItem mitem = new MenuItem();
yourMenu.Items.Add(mitem);
mitem.Text = "Text for my menu";
Narendran N
can I use
masterlbl.Text = lblContent.Text;
instead of
lblContent.Text = masterlbl.Text;
I want to put value obtained in default.aspx in masterpage control.
hi...
i have one menu control in master page, i am trying to access from content page for adding some more menu item...in asp.net 2.0
give me sample coding..
How to apply jquery in masterpage's content page....
Updating the master page when the control is within an update panel on the content page. i am trying lot but not get success ?
Hello sir,
I want to call a method by an anchor tag and the anchor tag is written at aspx.cs file with innerhtml.
For example:
DivId.InnerHtml = InnerHtml + "<a>Clickme</a>";
This code is written in the aspx.cs file.
Please guide me how to call a server side method by this anchor tag.
I Need to fetch the value from child page to master page while child page loading , is it possible to do so?????
Javascript popup running: how to get that text box value on the server side?
2)how we done the validations?give me an example.
Hi friend,
This is JavaScript validation code.
registration form in jsp
Quartz Tutorial
;
In this Quartz Tutorial you will how to use Quartz Job scheduler in your java...
In this section you will learn about the importance of Job Scheduling in
your java... job scheduling application in Java.
Download Quartz Job
core java
core java how to display characters stored in array in core java
Linear search in java
Linear search in java
In this section we will know, what is linear search and how linear works.
Linear search is also known as "sequential... or a string in array.
Example of Linear Search in Java:public class LinearSearch
Top 10 Java Applications
Java is the wonder programming language that is widely used for most of the modern day computing jobs, from designing websites to game applications to preparing interactive video to any other application jobs. Without java applications we
Job scheduling with Quartz - Java Server Faces Questions
Job scheduling with Quartz I have an JSF application deployed... to database. It works fine but when the Quartz scheduler fires a job it accquires... while initialization or while calling the job. Hi,How you
Top 20 SEO Techniques for Google Search Ranks
and articles on every core topic is one of the top SEO techniques for Google search... experience for the user rather than so called backdoor tactics to hit search ranks. Though in presenting top SEO techniques for Google search ranks we are most likely
want to get job on java - Java Beginners
want to get job on java want to get job on java what should be prepared. To know java quickly. Just click the following links:
Core java
Core java difference between the string buffer and string builder
core java
core java please give me following output
core java
core java what is the use of iterator(hase next what is the max size of array?
You can declare up to maximum of 2147483647
core java
core java how to compare every character in one string with every character in other string
Core Java
Core Java What is the significance of static synchronized method?
Why do we have the method declared as static synchronized
The Need for Outsourcing
of contracting out
certain non-essential or non-core processes... and even massive cost reduction that results from outsourcing jobs,
processes... on their core expertise like IT, Hotels, Health etc while letting people
manage other
Confessions of an Internet "Shock Jock"
An anonymous reader followed up on the Windows memory-leak fraud scandal, which is worth reading before you read the perpetrator's justification. "Randall C. Kennedy comes clean about his past, his relationship to Craig Barth and how it all came tumbling down. Includes an inside look at the politics of IDG and why you can never trust an IT publication that's as obsessed with page views as InfoWorld."
The downside of internet anonymity (Score:4, Insightful)
Re:The downside of internet anonymity (Score:5, Insightful)
There is no downside to internet anonymity, that would also exist without internet anonymity.
Re: (Score:2)
It seems to encourage comma splices, too!
/ grammar nazi, sorry
Re: (Score:2)
Is it any different than a Pen Name? (Score:2, Insightful)
Ben Franklin filled his paper with tons of his own writing.
RCK got it backwards. He should have written/blogged as another name. That would have protected his "first love" in a better manner.
I see it as confirmation that Blogging and the "Blogosphere" is an empty and thoughtless echo chamber.
Re: (Score:3, Funny)
Wow. Your brush was so broad, you tarred yourself in the process. Nice.
Re: (Score:2)
I disagree. That people lie, cheat and are douchebags has nothing to do with anonymity but with being a liar, a cheater and a douchebag.
Taking anonymity away would not make people who are douchebags any less of a douchebag. The difference is that before, we never knew how many douchebags there actually were.
Re: (Score:2)
Flame or clever witticism? It could go either way.
Not going to read it (Score:5, Funny)
Re: (Score:3, Informative)
Re: (Score:2)
And implode it did. After publishing a particularly alarming set of findings - which I still stand behind while continuing to evaluate new data - the internet became engulfed in controversy.
Awesome. He continues to demonstrate that he's technically incompetent as well as being a fraud.
Re: (Score:2)
Considering that just about everything this guy has ever written has been anti-MSFT FUD, I agree that it would be fitting to put one more line of bullshit as a disclaimer at the top of his post.
I would say the bigger question is "What did InfoWorld know, and when did they know it?" because according to Thurrott [winsupersite.com] pretty much anybody who spent more than 2 minutes alone with the man knew he was full of shit; InfoWorld knew he was full of shit and basically said "we don't care, his FUD equals lots of traffic".
Re:Not going to read it (Score:4, Informative)
The real question is -- why should we trust *this* column from him, when he's been caught lying in the past? "This time it's the truth, really!"
Because .... (Score:2)
He is a true Microsoft fanboy. Anyone who gushes so thoroughly about how good Microsoft and its products are is simply deluding himself and doesn't have any other experience to compare it with. And everyone knows Microsoft fanboys with no comparative experience are more honest than
... well, honest people.
He brags about the money he made when that has nothing to do with his excuse for a mea culpa. It looks more like begging for attention.
He pretends to show how innocent and naive and gullible he is, blogg
Here I'll help (Score:5, Insightful)
After all, it’s not as if I had trafficked in nuclear secrets or stolen someone’s credit card information.
"Look guys it wasn't so bad, I was just foolin, no big deal!"
I merely tried to shield what was important to me from the fallout of the world that had been created for me.
"I'm the victim here, but I'm still a manly man, look at my sacrifice, I'm jumping on the grenade here! (as I throw everyone close at hand under the bus)"
And in the end, I failed miserably.
"Please feel sorry for me now that I've abused your trust for years and years."
Wait is he trying to say that he almost got away with it, man he wishes he got away with it?
Fuck this asshole forever. As if what he's already done isn't enough, he tells his life story like anyone gives a shit. "Ohhh look how much money I made I am so awesome and knowledgeable no wait feel sorry for me I'm just a man—a very manly man—protecting his family. But seriously, I'm rich and super smart, oh by the way buy my product you can trust me. I promise I won't create any more personas to review my own product and tell you how great it is."
Re:Here I'll help (Score:4, Insightful)
He still refuses to admit his performance tool doesn't take into account Superfetch, and therefore the story about Windows 7 computers unnecessarily swapping was complete trash. You should see the twisting of words required to keep his tool's numbers plausible--
I think in the latest iteration of crap-slinging he's claiming that Superfetch is a bad idea because the best computer will have a tiny cache which contains only what it needs. Which is true I suppose... for your magical mind-reading computer... but here in the real world, a larger cache is better since your computer has no idea which bit of data it will need next.
During this, it's also come out that the analytics data sent by his tool is sent un-encrypted over port 80, and can be linked to the individual computer that sent it.
Total scumbag.
Re:Here I'll help (Score:4, Interesting)
Oh and if you haven't been following, the main cause of problems was (partially) that their tool was comparing committed bytes against physical bytes. The problem is that memory is committed against the pagefile, not physical memory... therefore it's quite possible for my computer to have:
4 GB total physical RAM
4 GB committed
3 GB available physical RAM
Via his tool, my computer would show up as memory 100% full, paging like mad. In reality, it's not paging at all. The only reasonable conclusion you can draw from that data is that my pagefile is at least 7 GB large.
Their tool was also measuring Page Ins as a stat, without realizing that memory-mapped files will trigger Page Ins even if they're already in memory. As happens with, for example, every .exe file you run, since Windows memory-maps those first thing.
The guy claims to love Windows NT, but he sure loves to slander it... oh well.
Re: (Score:2)
Re: (Score:3, Funny)
Re: (Score:2)
Rough, guess, I'll RTFA now. But he is just that kind of guy...
Re: (Score:2)
having RTFA, you're pretty much correct. he talks a whole lot about things "that happened to him" and takes very little responsibility for the fact that he brought most of it on himself. he seems to blame infoworld for the damage caused to his reputation as the result of his writing an intentionally inflammatory and salacious blog, and uses that as justification of his creation of an 'alter ego'. and honestly, all that would have been fine if he hadn't then gone on to shill his pseudonym's product using
No Choice at This Point (Score:5, Interesting).
So, what next? For starters, neither the exo.performance.network or Devil Mountain Software, Inc., are going anywhere anytime soon.
Surely he must realize that open sourcing everything about exo.performance.network is the only thing he can do at this point. I mean, no one's going to trust him again if he has any way to manipulate the data/results without being subject to complete inspection. The only option I see is to open source the software client and post the raw data alongside his own analysis. Without that I'm not stupid enough to trust an adoption rate quoted from this guy, let alone average disk I/O queue on Windows 7. Without this kind of auditing, I'm sure those numbers will turn out to be just enough to make my eyes widen and my finger click his link. I am saddened that people will probably continue to run his client without knowing this whole story of how they were manipulated by a particularly crafty scam artist.
Re:No Choice at This Point (Score:5, Insightful)
Slashdot has people with most likely even more technical backgrounds. It tells you something that he never says what he found (with his "reasonably technical background"), and that he acknowledged "XPnet's data couldn't determine whether the memory usage was by the operating system itself, or an increased number of applications". He didn't mention what kind of RAM usage is full, never said anything about SuperFetch or anything else. He practically knew nothing but just shouted out bullshit. He even says it himself:
"The persona of Craig Barth was exposed as one Randall C. Kennedy, and the entire web of half-truths and misdirection was exposed as the ruse that it was."
This guy is still full of $hit (Score:5, Insightful)
Balancing the two worlds had become almost impossible, and I longed to escape from the "shock jock" persona that had been created for me...
I merely tried to shield what was important to me from the fallout of the world that had been created for me.
Sounds to me like this guy still is incapable of accepting responsibility for his own actions. If he can't accept responsibility for what HE created and what HE did, how is he ever going to have any measure of integrity?
-Rick
Re: (Score:2).
There's one point I keep raising and haven't seen an answer to. Win7 will use the page file to swap out running applications in favor of cache/superfetch. I see it regularly when I don't use an app for a while but leave it running; or minimize it to the task bar -- and have confirmed it with perfmon. So while technically it can be "explained" as a result of SuperFetch and caching, that doesn't invalidate the point that Windows is using memory to the exclusion of applications. Presumably it is trying to
His definition of "shock jock" (Score:5, Insightful)
His definition of internet "shock jock" appears to be closer to my definition of "unethical sack of shit," but why quibble over semantics.
Re: (Score:2)
I enjoyed the part where he frames the story as his 'fall from grace' and then goes on to detail how he got caught deceiving people.
Or "troll" (Score:2)
Actually, maybe it's just me, but it sounded to me like a euphemism for "troll". I mean, that's what we used to call the people who posted something shocking or inflammatory, to get attention.
Interesting (Score:5, Funny)
I've never seen a CV written in a format like that before.
Uh... (Score:5, Insightful)
"Includes an inside look at the politics of IDG and why you can never trust an IT publication that's as obsessed with page views as InfoWorld."
Or, say, Slashdot, which got InfoWorld half those hits by regurgitating its bullshit in the first place?
Come on Slashdot editors - you can't post that quote, almost as if you're pretending that you're somehow innocent of this. You may have been unwitting pawns in the InfoWorld hits game, certainly, but you posted a FUD article about Android fragmentation just a day after InfoWorld had been outed as guilty of this and untrustworthy, and that suggests that perhaps you enjoy leeching hits off their FUD as much as they enjoy generating them. So why pretend that Slashdot too doesn't use shock articles sometimes to try and increase hits?
Don't get me wrong, I like a lot of Slashdot articles else I wouldn't come here, but it's pretty obvious that some of them are inflammatory FUD (hell Slashdot posted the original article in question) and that others of them are Slashvertisments.
Slashdot's credibility absolutely has decreased over the years because of this, and so it may want to read the above quoted sentence and take some lessons from it itself to ensure it avoids ever heading the same way. I suspect that the editors play the biggest role in this by you know, doing some actual editing and checking the authenticity of the article they're about to post.
Re:Uh... (Score:4, Insightful)
Slashdot's credibility absolutely has decreased over the years because of this,
Credibility? You must be new here. Slashdot isn't about credibility, it's about discussion. Individual slashdot posters have or don't have credibility. Slashdot editors have never earned their titles.
I suspect that the editors play the biggest role in this by you know, doing some actual editing and checking the authenticity of the article they're about to post.
Again, YMBNH. They have never done this. Why start now? If anything has harmed slashdot's "credibility" it's the obvious slashvertisements.
Re:Uh... (Score:4, Interesting)
So why pretend that Slashdot too doesn't use shock articles sometimes to try and increase hits?
InfoWorld writes and generates news. Slashdot merely links to it and provides a discussion forum. Infoworld asks you to assume that it has credibility; Slashdot asks you to assume nothing except "this link might be interesting to technically-minded people."
You're right that Slashdot linked to the original article [slashdot.org] in this sorry mess. Infoworld claimed its conclusions were correct. Slashdot did not; it merely said, "Hey, look what Infoworld says" -- and then enabled a lengthy discussion of the merits and problems of the Infoworld article. Much of that discussion questioned Infoworld's results. Frankly, that's exactly what Slashdot is for. It actually is innocent in this.
Re: (Score:2)
Re: (Score:2)
Inasmuch as Infoworld puts software and hardware through tests, then yeah, maybe they ARE generating news by going out and crashing cars. (or servers, or something).
Re:Uh... (Score:4, Insightful)
"Frankly, that's exactly what Slashdot is for. It actually is innocent in this."
Well no, last time I checked, that's what Digg was about. Slashdot was about selecting worthwhile articles that were actually worth reading and weren't just FUD/advertisements.
Slashdot specifically selects articles, it filters articles, and it's the quality of that selection and filtering that I am questioning.
People come to Slashdot because they do not expect to have to deal with the turd that Digg churns out. Otherwise, if there is no filtering, and as you say, it's just about publishing any old thing and saying this might or might not be of interest, then they might as well just replace the front page with firehose and not bother wasting time having editors in the first place.
Re: (Score:2)
Otherwise, if there is no filtering, and as you say, it's just about publishing any old thing and saying this might or might not be of interest
I didn't say "any old thing". I think the original article's claim about Windows memory usage was very relevant to a lot of Slashdot readers. It wasn't up to Slashdot editors to decide if Infoworld conclusions were right; it was up to them to decide if Infoworld's conclusions were worthy of discussion.
But we may be talking past each other here.
Re: (Score:2)
I dunno... personally, I come to /. for the entertainment that comments provide, not so much for the stories themselves - there are plenty of other places where I can read the news alone, usually long before they even hit the front page here.
And in terms of comments, that story was certainly an interesting one.
Re: (Score:3, Insightful)
Re: (Score:2)
For once this story isn't about windows. It's about some guy who flat out lied to get a few more page impressions.
Re: (Score:2)
This whole affair started with this article: [slashdot.org]
Which most certainly was anti-Microsoft tripe of the sort Slashdot loves to post. The headline in the Slashdot article is the lie this guy told, which sadly worked.
Re: (Score:2)
The story was one guy willing to say anything just to get more people looking at a site. There are loads of people like that but somehow this one got noticed.
At least it makes a change from the pro-Microsoft tripe that appears in so many comments on Slashdot.
Re: (Score:3, Insightful)
Most of us did pick up that it was rubbish. We do prefer our anti-M$ rants to be based on facts.
Re: (Score:2)
Re: (Score:2)
Yes, it's right there in the Preferences.
Dynamic Index -> Exclusions
or
Classic Index -> Authors
Re: (Score:2)
I'm pretty sure there is, I think I've seen it before somewhere in the options, but whilst some editors are worse than others, there's no real consistency. Sometimes even the better editors post shite and every once in a while the shite editors post good stories.
Re: (Score:2)
I've never understood why anyone 'trusts' any company that gives them something for free. Their main goal is -always- to earn as much money as possible. Most of the time, that means being ethical, because if they aren't, -this- kind of thing will happen and destroy them. But some companies aren't that smart. And the ones that are smarter get away with little lies constantly.
Re: (Score:2)
Don't get me wrong, I certainly don't trust them, but that's exactly why I get annoyed- because to me, the slashvertisments and FUD articles are so blindingly obvious that it's annoying having to wade through them at all.
I do not trust The Register for the same reason, they heavily moderate and regularly don't allow publication of comments that give a counter-point to the original author on certain topics (global warming, file sharing) and certain authors don't accept comments on their articles at all (i.e.
Re: (Score:2)
Slashdot's credibility absolutely has decreased over the years because of this, and so it may want to read the above quoted sentence and take some lessons from it itself to ensure it avoids ever heading the same way.
Slashdot never had any credibility to lose. Editors are chosen based on some completely random factor I haven't yet determined (in kdawson's case, it was foaming-mouth hatred of Microsoft combined with willingness to spread lies, for example.) It's not like they're coming from the New York Times
Where's the "downfall" part? (Score:5, Insightful)
After the 96th paragraph about how "Major IT firm X comes knocking at my door", I realized this guy is your usual narcissistic fuck and stopped reading. The choice of phrases like "comes knocking at my door" tells me everything about this guy: he wants to clone himself so he can finally fuck someone worthy of his love.
Seriously. I did not need a thousand word sub-essay on Dvorak, Windows NT and NetWare. What a fucking retard.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Funny)
Where's the "downfall" part?
You mean the part where Hitler starts yelling at his officers for listening to internet Shock Jocks and complaining about how much money he lost on this scandal? I bet it's up on YouTube by now.
Re: (Score:2)
...The choice of phrases like "comes knocking at my door" tells me everything about this guy: he wants to clone himself so he can finally fuck someone worthy of his love.
Boy, every now and then someone on Slashdot brings teh awesome. Hilarious!
Who? (Score:5, Insightful)
"Most Reviled Person on the Internet, 2010 Edition", "while the future may see my name relegated to the role of punch line for a crude party joke". Sounds like this guy has a vastly overinflated sense of self-importance. Or maybe I don't spend enough time on the internet to know who the Most Reviled Person was and will be doomed to laughing uncomfortably trying to blend in at parties when people start busting out the Randall Kennedy jokes.
Re: (Score:2)
After publishing a particularly alarming set of findings – which I still stand behind while continuing to evaluate new data – the internet became engulfed in controversy
An over-inflated sense of self-importance, or a woeful ignorance of the scope of the interwebs. Then again, maybe we're just jealous because we haven't made enough to make sure that "we never have to work again". Yes, I'm sure that's it... disregard my post, it was just my envy speaking.
Meanwhile... (Score:3, Interesting)
Robert Enderle still gets playtime on NPR.
Maybe it's better to just be an asshole than to be an asshole and try to hide behind a nom de plume.
--
BMO
Vista hatred was role-playing, flame-fanning (Score:2)
Re: (Score:2)
Did anybody tell him there are all sorts of things between black and white? I mean, compared to him even Slashdot's front page is full of nuance.
Who cares? (Score:2, Funny)
Re: (Score:3, Funny)
*I* don't.
Journalists report shock (Score:3, Insightful)
Journalists report shock, not stories. They have always been willing to bend the truth to get more readers.
The wise man will always judge for himself.
Making things worse (Score:2)
PCs are increasingly complex and there are lots and lots of things that can go wrong with them. Users are desperate for explanations for why their particular machine doesn't seem to run as well as it used to or is supposed to. Snake oil salesman like this doofus make a living selling simple explanations to complex problems that seem logical but are often wrong. Sometimes not just wrong but maliciously wrong. Instead of helping they're just making things worse. And rags like InfoWorld are just as bad, overlo
Brevity (Score:2)
TLDR
And even scanning the text nearly bored me to sleep.
Geez! (Score:2)
Is an extra semicolon also too long for you? It's "TL;DR", you short-sighted sap.
Re: (Score:2)
TL;DRPTFBP [wiktionary.org]
What a piece of work (Score:4, Interesting)
He's got more name-drops than an Oscar speech (Score:2)
This guy's rambling post reminds me of every last name-dropping frat-boy asshole I've ever worked with. He drops more names, completely at random, than your stereotypical Hollywood agent. He must have had some really good editors throughout the years, because I can't imagine reading an entire book by this clown. Maybe this is what passes for journalism in the perpetually retarded, and wrong, "IT Analyst" industry.
SirWired
danville (Score:2)
Never heard of him until today? (Score:2)
Judging from the content and length of his article I can see why, if I had run across anything he'd written in the past I'd stop reading it two paragraphs in.
Most importantly, *DONKDONK* Law & Order, were you lying then? or lying now? I'm guessing both.
Controversy Sells: Personal Experience (Score:5, Insightful)
Boom! 300 page views that month. A dozen comments. Flamewars and fans.
If I'd been earning money from that blog, you bet I'd have taken a hint and continued to write things about how Obama is a commie, Glenn Beck should head an armed invasion of those baby-eating godless socialists in Europe, minorities are shifty, oil companies are conspiring against hamsters, and gays are actively plotting against our way of life every time they go Satan-worshipping on moonlit nights. Real me wouldn't stand for any of those, but real me - the regular guy who lives and lets live - doesn't sell as well.
Fox and MSNBC are more attractive investments than middle-o'-the-road CNN. The New York Times is doing all it can to survive, while the Sun and the National Enquirer sell on like it's 1970. Trash sells. I blame the man, but I also pity him. Only human, and as LotR says, the hearts of men are easily corrupted.
what difference does it make. DATA MATTERS (Score:2)
so he was barking with randal c kennedy persona to sell the data he produced legitimately with his real, craig barth identity.
what the fuck does it matter in regard to the data, whether he was putting out a second, fake persona to advertise it? the data won't change with the nature of the advertisement, it's still data. if the data is solid, it means it is valid. if the data is supported by similar findings from other sources, then no one can question the data.
Re: (Score:2)
Delusions of a Dickhead (Score:2, Interesting)
Just like George Costanza couldn't pick his own nickname ("T-Bone"), YOU cannot decide who the most "notorious shock jock" is. Until I heard about your lying bullshit, I had ne
Re: (Score:2)
Agreed. And we all know that Dvorak is infinitely more famous for writing complete BS for the sole purpose of getting people riled up to increase his page views.
Automated Blocks? (Score:2)
From what I understand, the slashdot submission process could be modified to include an automatic filter for blacklisted sites. Couldn't news aggregators (such as Slashdot) ban Infoworld? While you are at it, block that website that posts biased game reviews.
Re: (Score:2)
Wouldn't that be ALL of them? Well, at least all those that have advertisement paid by game publishers and developers.
tl;dr (Score:2)
Actually I did skim it, and it looks like the relevant pieces start 2 paragraphs prior to the "A Slippery Slope" section, halfway into the novella. At least they didn't paginate...
What a jackass (Score:5, Insightful)
Well, if you strip away the self-important tone of TFA, it boils down to this:
A guy with a technical background discovered the rush of trolling a large audience. The major difference between this and a large segment of /. readers is, he did it under a journalistic guise - which makes him an unethical asshat whereas the /. trolls are merely run-of-the-mill asshats.
So then he tried to have his cake and eat it too: he wanted to enjoy the respect of his peers in technical endeavors while still having his fun as an asshat blogger. So, big surprise, it backfired and now he's lost the respect of his peers.
As for the Windows 7 RAM usage data - he may well have reported that in good faith, but it doesn't matter because of who he'd chosen to become. (As much as he tries to sound like he was drawn into his situation, ultimately he chose to be what he was and is; this article really just shows that while he may be resigned to the consequences, he hasn't truly accepted responsibility.) Maybe he really has reason to believe his findings, or maybe the desire to save face is coloring his view. (He certainly wants some measure of justification; I guess it's easier to feel that it's all unfair if the story that gets you caught was a case where you were factually correct.)
Re: (Score:2)
I disagree. I may be a run-of-the-mill or even outright shitty troll here on slashdot, but let's face it, we have a slightly higher than normal troll quality level here, so my shitty slashdot troll is a gold medal winner on most of the rest of the Internet :)
Re: (Score:2)
Well, if you strip away the self-important tone of TFA, it boils down to this:
The confession and semi-apology wouldn't exist if he had not been outed.
In other words (Score:2)
Dear internet: YHBT.
And what's the number one rule for dealing with trolls? Don't feed them.
tl;dr (Score:2)
There. I just saved you 20 minutes of wading through his long winded e-wanking.
-B
Time to move on... (Score:2)
I think some people are being a bit harsh. Self important? Definitely. Made bad decisions? Definitely...
The guy came right out and admitted what he did, and people make mistakes. It's very difficult to understand a situation unless you have been in that person's shoes.
He's gotta deal with the fallout over what he did, professionally and in public - and IMO, that's enough.
I guarantee that there are worse assholes posting less credible information all over the place. The moral of the story is that if you buy
Re: (Score:2)
The guy came right out and admitted what he did
No he didn't. He got caught and outed after carrying out a professional deception for years on end, and to his financial benefit. That's not "people make mistakes", that's being a grifter. The fact that his accomplices (the editors at Infoworld) aided and abetted him does nothing to excuse him.
I call BS (Score:4, Insightful)
Randall Kennedy writes for a trade publication that presents itself as an authority in its space. I've read several of his posts in the past and wasn't shocked by his outrageous attitude, but by the poor thinking and conclusions he presented. That's shocking all right, but not in a good way. I unsubscribed from Infoworld after realizing they cared more about their click-through rate than the quality of their "journalism."
Howard Stern is, for argument's sake, the original shock jock. He expresses his personal opinion on a radio show that is clearly identified as an entertainment program, no more, no less. His opinion of dwarves is not going to affect someone's purchasing decision.
Frankly, I lay the blame at the feet of InfoWorld's editor. Read the comments on any of Kennedy's articles and you realize that the editor must have clearly known the audience found Kennedy's opinions suspect. Clearly the page views were more important to them than the quality of their offerings.
I'd blame Darrel and Ron (Score:2)
"...It was there that I cut my teeth on technologies like NetWare, LAN Manager and SCO UNIX.
..."
Ah, so you can't blame the guy; he's been working for two of the biggest FUD factories of the past 10 years.
Where's the punchline? (Score:2)
I hope this article is a joke; it's the thing that would make this story interesting.
Screw him (Score:4, Insightful)
1.) He knew what he was doing was scummy.
2.) He continued to do it anyway.
3.) It ruined his reputation.
4.) He wished he hadn't done it.
5.) Instead of eating shit for doing something stupid, he whips up a new name and uses it to be 'reputable'; except he is not reputable, and he instead further proved how disreputable he is.
I'm not familiar with him, his blog, or much anything else to do with this story, but this is what you get when you behave poorly. So take your smug ass and your piles of cash, fuck off, and go away.
No one trusts you anymore, nor should they.
You rate right up there with every loser CEO who thinks he can do wtf he wants because he has piles of money and need not regard anyone around him.
Bastard.
Re:Can you malloc(0x200000000) ? (Score:4, Insightful) [microsoft.com]
Re: (Score:3, Informative)
Just malloc'd 5GB on my 6GB Windows x64 machine, worked fine.
Here's the program, which I compiled with Visual C++ 2008:
#include <stdio.h>
#include <tchar.h>
#include <malloc.h>
int _tmain(int argc, _TCHAR* argv[])
{
    __int64 allocsize = 5; // in GB
    void* pMallocated = malloc(allocsize * 0x40000000);
    if (pMallocated)
    {
        _tprintf(_T("Successfully allocated %lld GB\n"), allocsize);
        free(pMallocated);
    }
    else
        _tprintf(_T("Allocation failed\n"));
    return 0;
}
Re: (Score:2)
On an unrelated note, a hint: since you're using VC++ 2008 anyway, use "long long" and the "%lld" printf specifier instead. It's been supported since at least VC++ 2005, and is much more portable (being in C99 and C++0x, and all).
Re: (Score:2)
Not guaranteed to be 64 bits though. No stock C++ types are guaranteed to be any size, which is actually horrible for cross-platform code.
Re: (Score:2)
No he isn't, God is cruel and heartless. If he was "just" no-one would fight wars, he would just smite the "bad" side. Instead he allows people to die by the millions.
Re: (Score:2)
Of course 64-bit Windows and Linux can malloc() more than 4GB. Why else compile an application for 64-bit? Even better, unlike LoseThos they can malloc all your free RAM as if it were one contiguous block, because they actually support virtual memory.
LoseThos seems to trash any and all attempts at process separation made in modern CPUs and OSs. Any process run on the machine can crash the whole system, or even trash the system files, making it unbootable. It's just not practical for a desktop OS. It's ok if
Re: (Score:2)
Yes, you can. Well maybe. I'm not positive about a single alloc request of that size, but Windows and FreeBSD will be happy to allocate more than 4 gigs to a single process via multiple allocs. I can't recall ever preallocating that much, but I'd be surprised if it didn't work.
I've done so with both FreeBSD and Windows, and both will even go so far as to overcommit and allow the alloc to succeed even though they don't have 4 gigs of ram in the machine, just 64 bit kernels.
Re: (Score:2)
Interestingly enough, in OS X, you can use a 32 bit kernel and still run 64 bit apps that use more than 4G of ram, even though the kernel can't.
Not sure WHY they went this route; perhaps it saves a little RAM on pointers. It's neat either way, though.
Some Friendly Advice to Make Slashdot Enjoyable (Score:5, Insightful)
to comment (4 or 5 months ago) that IDG news is a biased, paid up, propagandist, political mouthpiece. I was modded as a troll, back then.
I'll bite. I skimmed through your comments looking for this -1, Troll claim that you have made and was unable to find it. According to Google (not an authoritative source) I can only find one comment in which you name IDG [slashdot.org] and it's not modded Troll, it's modded Offtopic. Nor does it rest at -1, merely at 0. There's an important difference between the two. You may have had a legitimate point it just had no place on that article for Slashdot. I suspect that if you had compiled a list of examples that would conclusively lead the reader to agree with you, you might have even gotten a +2 Interesting.
... but who should be the ones laughing in those situations? Probably the people who are employed.
I've noticed unfortunately that, when you do cite sources, it appears as though you're trying to pound a square block into a round hole [slashdot.org]. Be careful not to look for things to prove you're right but instead to read many things about the subject before concluding that there is evidence from reliable sources or maybe your viewpoint needs adjustment.
I have several friends from India, they have never complained of the media [slashdot.org] bashing [slashdot.org] India. I cannot say I've noticed this beyond jokes about outsourcing and telemarketing
On top of that, you throw out the sporadic groundless conspiracy [slashdot.org] which can hurt your message:
No popular Indian newspaper reported anything like that. I'm pretty sure that this news has been created by the manipulation wing of CIA and published by its media partners. Those filthy bastards don't like to be idle. Now that they've exhausted all the crap they can publish about China, they've turned towards India. Please don't believe them.
Listen, if you have a message to get out, that's fine. But a short post with such large conspiracy claim is often outright dismissed.
Your comments are often curt and therefore don't have a lot of content. This results in you lashing out at your reader [slashdot.org] which violates the know-your-audience rule of writing and often brings nothing new to the discussion [slashdot.org].
My biggest advice to you is to add more meat to your comments and don't get in little pissing matches with long back-and-forths between you and another poster. People don't enjoy reading ping-pong matches. Think out your argument or claim ahead of time and account for all viewpoints from the get-go. That's my advice. You rarely see me post more than one or two comments per article and it's not because I don't read the responses, it's because I come here to say something, I say it and then I'm done. Anything I missed was an error on my part and I deserve the valid rebuttal.
I know this post looks like a direct criticism or attack on you but it's not. It's meant to be constructive criticism because you have some real gems in your posts but every so often get really careless or resort to name calling or make outrageous claims with no proof. If someone had convinced me that this Randall C. Kennedy guy was a complete bullshitter months ago, I would have loved to have known ahead of time.
Re: (Score:2)
Wow, you should sell Slashdot posting evaluations. | http://news.slashdot.org/story/10/02/24/1348202/confessions-of-an-internet-shock-jock?sdsrc=rel | CC-MAIN-2015-27 | refinedweb | 6,359 | 71.24 |
Source code encoding under Qt5
Hi,
I am trying to write (or more correctly: to learn the new Qt with) an application containing some Polish words (ąśćółęńżź, etc). In Qt 4 I was using tr() and setCodecForTr() to make Polish words display correctly, but now I cannot use setCodecForTr() anymore, and setCodecForLocale() cannot fix my problem. What should I do?
My whole system is (I believe) in UTF-8.
Nothing special needs to be done; it will work by default, if the exec-charset of your compiler is UTF-8.
On Qt 5, for the libraries themselves, UTF-8 has been the default encoding as of a couple of days ago [1]. It's highly recommendable to switch to UTF-8 for your own sources too.
fn1.
[quote author="1+1=2" date="1338322461"]Nothing special need to do, it will works by default. If the exec-charset of your your compiler is UTF-8.[/quote]
For some reason it doesn't work. Polish letters are invisible for me. If I compile the source code under Qt 4.8 I see "not-encoded" UTF-8 letters (2 weird chars instead of one Polish letter).
[quote author="Volker" date="1338323417"]On Qt 5, for the libraries itself, UTF-8 is the default encoding as of a couple of days[1]. It's highly recommendable to switch to UTF-8 for your own sources too.
fn1.[/quote]
My source code has been in UTF-8 for many, many years, so unless Qt Creator 2.5 makes a mistake here (and after compiling the project under Qt 4.8 I believe Qt Creator works just fine), this source code is also in UTF-8.
EDIT:
What is weird: if I create a label in Designer I see Polish words correctly, but after changing the text of the label in source code the Polish letters are invisible.
IMO, you need to provide more information.
- the SHA of Qt5's source code
- The compiler you used. Don't tell us you used something like
@
setCodecForTr("Something other that utf8")
@
in Qt4.
If so, obviously your exec-charset is not UTF-8.
- Make sure abc is UTF-8-encoded bytes.
@
char abc[]="ąśćółęńżź";
@
BTW, note that you should make sure that your exec-charset is UTF-8. The exec-charset may be different from the input-charset.
[quote author="1+1=2" date="1338327856"]IMO, you need to provide more information.
- the SHA of Qt5's source code
[/quote]
Qt git directory: 9985003ac4a42adfa35db286eda1b2ae9656d85b
qtbase: ac16d722140661cd21949ca321b659ba2c359388
[quote author="1+1=2" date="1338327856"]2. The compiler you used.[/quote]
@$ g++ -v
Using built-in specs.
COLLECT_GCC=g++-4.7
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.7/lto-wrapper
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Debian 4.7.0-9' - --with-arch-32=i586 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 4.7.0 (Debian 4.7.0-9)@
[quote author="1+1=2" date="1338327856"]Don’t told us you used something like
@setCodecForTr("Something other that utf8")@
in Qt4.
If so, obvious your exec-charset is not UTF-8.
[/quote]
In Qt4 in every single project I always add this:
@QTextCodec::setCodecForTr (QTextCodec::codecForName ("UTF-8"));@
But the problem isn't with Qt 4 (where I always saw all chars, just encoded incorrectly without this line) but with Qt 5, where I don't see Polish letters when I write them in source code.
But how can I check this exec-charset (as far as I know this property has UTF-8 as its default value)?
[quote author="1+1=2" date="1338327856"]
- Make sure abc is utf-8 encoded bytes.
@
char abc[]="ąśćółęńżź";
@
[/quote]
@matthew@pingwinek:~/tmp$ cat main.cpp
#include <iostream>
using namespace std;
int main()
{
char abc[]="ąśćółęńżź";
cout << sizeof(abc) << endl;
return 0;
}
matthew@pingwinek:~/tmp$ g++ main.cpp -o cpp
matthew@pingwinek:~/tmp$ ./cpp
19@
So yeaaaa... my source code IS in UTF-8, I just don't see Polish letters in labels in Qt projects when I set the label text in source code (in Designer everything is fine).
You can write a simple example like this
@
#include <QApplication>
#include <QLabel>
#if _MSC_VER >= 1600
#pragma execution_character_set("utf-8")
#endif
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QLabel label("ąśćółęńżź");
label.show();
return a.exec();
}
@
If other people can reproduce your problem, you can file a bug.
I get something like this (window style: oxygen, app style: plastique because of @Old plugin format found in lib /usr/lib/kde4/plugins/styles/oxygen.so@):
For me it looks like Latin1 or Windows CP-1251.
But maybe the problem isn't in Qt itself but in the Qt configuration? Maybe the font is wrong (serifs? Why are there serifs when my whole system uses Verdana?)? I didn't set up anything, just used the defaults.
EDIT: Interesting... even if I manually change the font of the label (in Designer or in source code) it doesn't change anything in the binary.
Just a few words about our powerful features. JUPITER SDK is the easiest way to implement streaming audio in your Windows Phone project.
Three-step implementation for a Windows Phone 8.1 streaming player:
using JupiterSdk;

namespace TestApp
{
    public sealed partial class MainPage
    {
        public MainPage()
        {
            InitializeComponent();

            // Activate your copy of Jupiter Sdk
            LicenseActivator.Activate("Enter your product key here");

            // Create a Jupiter AudioPlayer
            var audioPlayer = new AudioPlayer();

            // Create some tracks
            var track1 = new Track("RadioStream1", "");
            var track2 = new Track("RadioStream2", "");
            var track3 = new Track("LocalFile", "ms-appx:///Assets/song.mp3");
            var track4 = new Track("NetworkFile", "");

            // Create a playlist
            var playlist = new Playlist("MyPlaylist");

            // Add tracks to the playlist
            playlist.Tracks.Add(track1);
            playlist.Tracks.Add(track2);
            playlist.Tracks.Add(track3);
            playlist.Tracks.Add(track4);

            // Enjoy your music!
            audioPlayer.Play(playlist);
        }
    }
}
We provide different options for developers for the integration of JUPITER SDK:
Jupiter Sdk does not improve the Audio Codec Support of Windows Phone (See Supported media codecs for Windows Phone 8). However, even when the audio codec of a specific network stream is supported by Windows Phone, it may contain metadata resulting in an unsupported media source for the audio player. Jupiter Sdk deals with this problem by separating the stream in pure audio (which is given to the audio player for playback) and plain text metadata available to the developer through the Jupiter API.
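The answer above doesn't describe the wire format, but the usual convention for Internet radio streams is Shoutcast/ICY framing: every `metaint` bytes of audio are followed by one length byte and a metadata block. Below is a minimal sketch of that separation step; it illustrates the underlying convention only, not the Jupiter API, and the function name is made up:

```python
# Illustrative sketch of Shoutcast/ICY framing, which a streaming player
# has to undo: every `metaint` audio bytes, the server inserts one length
# byte (N) followed by N*16 bytes of metadata text.
def split_icy_stream(raw, metaint):
    """Separate an ICY-framed byte string into (audio_bytes, metadata_strings)."""
    audio = bytearray()
    metadata = []
    pos = 0
    while pos < len(raw):
        # Copy the next block of pure audio.
        chunk = raw[pos:pos + metaint]
        audio.extend(chunk)
        pos += len(chunk)
        if pos >= len(raw):
            break
        # One length byte: metadata length in 16-byte units.
        meta_len = raw[pos] * 16
        pos += 1
        if meta_len:
            block = raw[pos:pos + meta_len]
            # Metadata is NUL-padded plain text like "StreamTitle='...';"
            metadata.append(block.rstrip(b"\x00").decode("utf-8", "replace"))
        pos += meta_len
    return bytes(audio), metadata
```

Feeding a captured response body plus the server's icy-metaint header value through something like this yields clean audio for the player and plain-text strings such as StreamTitle='Artist - Song'; for the metadata API.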
Jupiter Sdk implements a class named AudioPlayer which is able, among other interesting features, to establish and maintain a http communication with a specific radio station. In order to do that, you only have to specify the Url of the station. If the connection succeeds and the content-type of the http response is adequate, the audiostream will be ready to be played.
The AudioPlayer class implements an event called MetaDataChanged and a property named MetaData. This property returns a class that implements the interface IMetaData. The default MetaData class is called JupiterMetaData, but you can supply your own MetaData class to parse the extracted metadata on your own.
Here is some sample code:
public class MyMetaData : IMetaData
{
    DecoderTypes DecoderType { get; set; }
    string ExtractedData { get; set; }
}
this.audioPlayer.MetaData = new MyMetaData();
this.audioPlayer.MetaDataChanged += OnAudioPlayerMetaDataChanged;

private void OnAudioPlayerMetaDataChanged(object sender, object e)
{
    // myMetaData
    this.myMetaData.Parse();
    CurrentTitle = ((MyMetaData) this.audioPlayer.MetaData).Title;
    CurrentArtist = ((MyMetaData) this.audioPlayer.MetaData).Artist;

    // or use JupiterMetaData
    ((JupiterMetaData) this.audioPlayer.MetaData).Parse();
    CurrentTitle = ((JupiterMetaData) this.audioPlayer.MetaData).Title;
    CurrentArtist = ((JupiterMetaData) this.audioPlayer.MetaData).Artist;
}
No. Jupiter Sdk will automatically recognise the characteristics of the stream and decode it according to the appropriate protocol.
In order to protect your copy of Jupiter Sdk from malicious use, your Publisher Id (and also your Package Name, in the case of a Single Project License) is required at purchase time.
Once you integrate Jupiter Sdk in your app, its license engine will check the identity of the publisher (and package, if it applies) at runtime, allowing the Jupiter libraries to run only if this verification is successful.
Your Publisher Id and Package Name can be found in the Package.appxmanifest file of your project. If you open this file in code view you will find both fields in the Package.Identity section under the attributes Publisher and Name, respectively:
<Package xmlns="" xmlns:...>
  <Identity Name="Your Package Name" Publisher="CN=Your Publisher Id" Version="1.1.1.0" />
  ...
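As a quick sanity check before purchase, you can read those two attributes out of the manifest yourself. Here is a small namespace-agnostic sketch (a hypothetical helper using only the standard library, assuming the usual Identity/Name/Publisher casing of appxmanifest files):

```python
import xml.etree.ElementTree as ET

def read_identity(manifest_xml):
    """Return (Name, Publisher) from an appxmanifest body, or (None, None)."""
    root = ET.fromstring(manifest_xml)
    for elem in root.iter():
        # Strip any XML namespace prefix ("{uri}Tag") before comparing.
        if elem.tag.rsplit("}", 1)[-1] == "Identity":
            return elem.get("Name"), elem.get("Publisher")
    return None, None
```

Paste your Package.appxmanifest contents into this and compare the result against the values you entered when purchasing the license.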
The Publisher Id and the Package Name of your App will be overridden by the Windows Phone Store during the publishing process.
For this reason, we strongly recommend first associating your app with the Windows Phone Store, and then using the resulting values of Publisher Id (and Package Name, for a Single Project License) for your purchase at
The Jupiter Sdk license can be obtained in three different flavours:
No problem: Jupiter Sdk integrates an internal pls/m3u parser. Once the http client detects that the given url leads to a pls/m3u file, it will give the file to the parser, and the background engine will sequentially try each entry of the pls/m3u until a playable stream is found. (Note: it is assumed that every entry in the pls/m3u file provides the same audiostream.)
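The .pls format such a parser has to handle is just an INI-style list of FileN= entries. A rough sketch of that fallback logic follows; it illustrates the file format only, not Jupiter's actual parser, whose internals aren't published:

```python
def parse_pls(text):
    """Extract stream URLs, in order, from the body of a .pls playlist."""
    urls = {}
    for line in text.splitlines():
        line = line.strip()
        # Entries look like "File1=http://..."; ignore Title/Length/header lines.
        if line.lower().startswith("file") and "=" in line:
            key, _, value = line.partition("=")
            index = key.strip()[4:]
            if index.isdigit():
                urls[int(index)] = value.strip()
    return [urls[i] for i in sorted(urls)]
```

Each returned URL would then be tried in sequence until one of them yields a playable stream, as the answer describes.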
Apart from the usual functions of every audio player (i.e. Play, Pause, Next, Previous, Stop, Repeat, Shuffle, etc.), the Jupiter Sdk AudioPlayer offers the possibility of recording live audiostreams, saving the recordings persistently on the phone and playing them back. Another interesting feature of Jupiter Sdk is the possibility of creating, managing and playing playlists.
You just have to get a copy of Jupiter Sdk with any of its available license types: trial, single project or unlimited. After that, you will need to reference the Jupiter Sdk assemblies in your project and declare the BackgroundAudioTask in the application manifest, and you will be ready to go.
Yes, we provide a demo project of Jupiter Sdk with full access to its functionality. The only limitation is that any playback will be automatically stopped after 30 seconds. Moreover, for this purpose you can use the Jupiter App available at the Windows Phone Store to evaluate the rest of the features of Jupiter Sdk.
Each of our licenses include 1 year free updates. If you want to keep receiving new updates and support you can do so by paying a renewal fee which is about 25% of the current license price. If you don't renew your license you can still keep using the version you have on hand. It will keep working without any restrictions. If you want to keep receiving new updates or if you have any additional questions, please contact us! We are happy to help!. All fields are required | http://jupitersdk.com/ | CC-MAIN-2016-40 | refinedweb | 937 | 54.12 |
gwibber does not refresh Facebook feeds
Bug Description
Since around November 28th, Gwibber stopped updating my Facebook feeds. First, I tried to delete my Facebook account from Online Accounts and then uninstall and reinstall Gwibber. Then, I removed the Ubuntu app from my Facebook app settings after doing all of the above. Then I checked my proxy settings, and my proxy is set to none. When I open Gwibber, I see feeds that are 12-13 days old, and when I try to refresh, it does not do anything. I mean it does not even write "refreshing..." at the bottom of the screen. When I check whether gwibber-service is running correctly I get no error in the terminal. I've looked around the bug sections of many sites but none of them had an answer.
Steps to verify this SRU:
[Test Case]
Ensure gwibber-service has restarted since the update by either a logout/login or "killall gwibber-service", then verify there is feed data for your facebook account.
[Regression Potential]
Regression potential is really low, it simply checks that the dict returned has a key, and if it doesn't use an empty value for the result.
Yeah, I can post status messages to Facebook too, but I can't refresh. Didn't try with Twitter because I don't have a Twitter account.
Gwibber is also broken on my setup. I only use Facebook and set it up via the Online Accounts menu. Empathy takes care of the chat just fine, but Gwibber stays completely blank with "Refreshing" displayed in the bottom left corner of the window.
The importance of this bug should be set quite high considering how popular Facebook is.
After checking the log file for Gwibber I notice that the first occurrence of this problem happened on November 27, 2012 at 20:28:36. If I'm reading the log file correctly, it's a problem with the dispatcher. To fully describe the first error in the log file I will paste it right here: "2012-11-27 20:28:36,132 - Dispatcher Thread-42 : ERROR - <facebook:receive> Operation failed".
As well, I will attach my log to this comment for you to look at. Maybe that will help you fix it a lot quicker.
thanks for the bug report
after reading the log file
2012-12-12 17:37:15,381 - Dispatcher MainThread : INFO - Running Jobs: 5
2012-12-12 17:37:15,382 - Dispatcher MainThread : INFO - Running Jobs: 0
2012-12-12 17:37:15,386 - Dispatcher MainThread : INFO - Running Jobs: 4
2012-12-12 17:37:16,348 - Dispatcher Thread-23 : INFO - Loading complete: 17 - 0
2012-12-12 17:37:16,727 - Dispatcher Thread-24 : INFO - Loading complete: 18 - 0
2012-12-12 17:37:17,654 - Dispatcher Thread-25 : INFO - Loading complete: 19 - 0
2012-12-12 17:37:21,425 - Dispatcher Thread-22 : ERROR - <facebook:receive> Operation failed
2012-12-12 17:37:21,426 - Dispatcher Thread-22 : INFO - Loading Error: 19 - Erro
assigning the bug to kenvandine
This problem is affecting all versions of Ubuntu with Gwibber. The problem with adding the Facebook account is also related to the Facebook API allocation limits, and there was no need to fix that issue. When the Facebook API allocation is fixed for the Gwibber and Ubuntu apps on Facebook, all the issues will be fixed.
It's a very annoying problem since it's happening in an application shipped by default with the OS.
It's a very bad problem :( Isn't it fixed yet?
Yes... Same problem with 12.10 and the 13.04 development release and, more important, Ubuntu 12.04.1, which is an LTS!
And no news about that for a few weeks now...
This has happened many times before and, as I said, the problem is from Facebook. Gwibber has reached the limit of its Facebook API allocation.
But yes, the problem is very serious because this is a feature included in the stock system and it doesn't work.
The problem is happening on all versions from 10.04 up to the 13.04 development release.
It might be a good idea to have an error message explaining what's wrong in the meantime, so users will know that it's not their fault.
#10 : so concretely? Any more Gwibber for Facebook?
For the moment Gwibber is unfortunately not much more than a microblogging client for twitter (and identica, which I don't know) :-(
Sadly there's no alternative for GNOME2 to tap the Facebook feeds !??
Have all the developers and maintainers of the middleware between Facebook and Gwibber been fed up to the back teeth with Facebook and have steamed away now ?
Same problem here. Here is the tail of my gwibber.log
2012-12-28 19:44:28,373 - Dispatcher MainThread : INFO - Dispatcher Offline, suspending operations
2012-12-28 19:44:29,885 - Storage MainThread : INFO - Cleaning up database...
2012-12-28 19:44:30,239 - Storage MainThread : INFO - Cleaning up database...
2012-12-28 19:44:32,816 - Dispatcher MainThread : INFO - Network state changed to Online
2012-12-28 19:44:32,819 - Dispatcher MainThread : INFO - Dispatcher Online, initiating a refresh
2012-12-28 19:44:43,533 - Dispatcher MainThread : INFO - Running Jobs: 1
2012-12-28 19:44:43,534 - Dispatcher MainThread : INFO - Running Jobs: 0
2012-12-28 19:44:43,536 - Dispatcher MainThread : INFO - Running Jobs: 1
2012-12-28 19:44:51,861 - Dispatcher Thread-2 : ERROR - <facebook:receive> Operation failed
2012-12-28 19:44:51,862 - Dispatcher Thread-2 : INFO - Loading Error: 0 - Error
The problem is that the feed doesn't contain a "description" sub-key for the "privacy" key.
Obviously this was present before but is no longer present.
But the code depends on it being there.
Workaround:
sudo vim /usr/share/
replace line 329 which reads
m[
by
if data["privacy"
else:
This is not a suggested fix as I don't know whether the else is needed as I don't know whether some other code depends on "description" being set in m.
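The pasted patch above is truncated in this copy. Judging from the traceback and the later comments, the gist is to stop indexing the optional "description" subkey unconditionally and fall back to a placeholder instead. In modern Python the same defensive pattern looks like this (a reconstruction of the idea, with a hypothetical helper name, not the literal gwibber diff):

```python
def privacy_description(data):
    """Return the privacy description from a Facebook post dict, if any.

    Mirrors the workaround's fallback to "Unknown" when Facebook omits
    the "description" subkey; this helper is illustrative, not gwibber code.
    """
    privacy = data.get("privacy") or {}
    if "description" in privacy:
        return privacy["description"]
    return "Unknown"
```

The point is simply that the feed's schema changed out from under the client, so every optional key needs a guarded lookup rather than a bare index.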
It works for me! :D Thank you very much for this patch.
To be clear, my fix is for version 3.6.0-0ubuntu1 of package gwibber-
To verify that you have the same problem, run the following two commands
killall gwibber-service
gwibber-service -do
Now you should get some output during the run that reads like:
Dispatcher Thread-5 : ERROR <facebook:receive> Operation failed
Dispatcher Thread-5 : DEBUG Traceback:
Traceback (most recent call last):
File "/usr/lib/
message_data = PROTOCOLS[
File "/usr/share/
return getattr(self, opname)(**args)
File "/usr/share/
return [self._
File "/usr/share/
m["
KeyError: 'description'
The 3rd line from bottom of this excerpt gives you the faulty line which is 329 for me.
"File "/usr/lib/
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
IOError: [Errno socket error] [Errno -2] Name or service not known
Dispatcher Thread-1 : INFO Loading Error: 0 - Error
Dispatcher Thread-1 : DEBUG <facebook:receive> Finished operation (0:00:40.388210)"
Not the same error...
You got that error, applied my fix and it worked?
If so I guess this was a coincidence.
Your error says that the Domainname of something gwibber wanted to connect to couldn't be resolved via DNS.
That error was because of an wireless problem. ;)
I finally have the same error than you without your fix.
@Vampire
Your Workaround fixed the problem for me (Gwibber 3.60). Thanks!
@Vampire
the fix
To replace
m[
by
if data["privacy"
else:
Worked, much appreciated.
The line number in the default Gwibber version in 12.04 LTS was 210.
Great Vinu,
my fix is for the default Gwibber version in 12.10 to which I just upgraded from Karmic. :-)
I can confirm the code in #23 fixes the Facebook feed problem on Ubuntu 12.04 LTS.
Please release the fix through Update Manager and, if possible, backport it to previous versions of Ubuntu using Gwibber.
If this fixes the problem then why hasn't Ken VanDine, the assigned bug fixer, fixed it yet and changed the bug status? I really don't know anything about programming in Python and feel ill-equipped to use this fix that you all have come up with. So I think I will just wait for the updated version to be released. So please, Ken VanDine, try to get this bug eradicated. As well, could you review the fix the people above have come up with and explain whether it is safe and what it does? My main concern is that it will mess with the security and privacy of Gwibber. At least that is what the block of code tells me when I look at it. However, I really don't mean any disrespect to anybody, especially if it does fix the problem. I just need a little more information about this fix before I make the change myself.
Vampire your solution is great!! You are the best!! Thanks!!
@Theodore
When Gwibber gets the answer from Facebook, it checks whether the answer contains a key named "privacy".
If the feed contains a key named "privacy" it reads out its subkeys "description" and "value" unconditionally, assuming they are always present.
For the "description" subkey this is not true, as it is not present, or not present anymore.
My WORKAROUND, NOT FIX as people call it, just checks for the presence of the "description" subkey before trying to read it and if it doesn't find it sets it to "Unknown". Because I don't know whether this description is relied on elsewhere in the code I do the else path with "Unknown". If the description is not used anywhere, the else path can be left out. Because of this uncertainness and because I don't want to look through all the code to find out, it is a WORKAROUND, not a proper FIX.
But it works fine and doesn't mess up the security or privacy of Gwibber.
if data.has_
m["privacy"] = {}
The above is the code block in the file, and as I stated I don't really know Python, so I'm asking if you could show me how to adjust this block to use your workaround.
The workaround, not fix, from #16 is working on an Ubuntu 12.10 installation, and I am seeing Facebook updates. Thanks.
@Theodore, as I wrote it, just replace
if data.has_
m["privacy"] = {}
m[
m[
by
if data.has_
m["privacy"] = {}
if data["privacy"
else:
m[
#23 worked for me just great!
don't know if it's the right fix, but it worked in 12.04
thanks a ton!
Thank you, Vampire. This has fixed the issue until the assignee can get the bug patched up and sent out. So, I thank you for all your help and patience; truthfully I am best with the PHP and MySQL languages. However, I am trying to learn Java and AJAX. Peace be with you all.
The workaround described above does not seem to work for me.
gwibber-service -o displays:
Dispatcher Thread-1 : ERROR <facebook:receive> Operation failed
Dispatcher Thread-1 : INFO Loading Error: 0 - Error
@peterrus, "gwibber-service -do", not "gwibber-service -o"
Vampire's fix (Comment #16 ) works for me.
Thanks Vampire.
PS: Jedit user here. Thanks for that too. ;)
@Dac You're welcome. :-) But it is jEdit, not Jedit. ;-)
@peterrus Did you find anything new with the correct command?
I can confirm that the workaround on #16 works.
Apparently, Facebook has changed the protocol; they might do it again in the future.
Maybe the code needs to be changed to either not care (which might not be safe?) or check the protocol before fetching messages?
No idea how that works, just spouting ideas...
The workaround on #31 worked for me in 12.04.1. Thanks!
BTW, why does a bug preventing use of one of Ubuntu's core apps (and, earlier, the one preventing adding a Facebook account) take so long to get fixed in the flagship (most stable, blah, blah...) version of the system? Isn't Gwibber meant to make Ubuntu social-friendly? It's highlighted in the installer, for example, but then it doesn't work. Am I missing something?
This workaround doesn't explain why some users are reporting they can't add a facebook account. However the workaround is fine, I'll look at merging that into gwibber upstream ASAP.
This bug was fixed in the package gwibber - 3.6.0-0ubuntu2
---------------
gwibber (3.6.0-0ubuntu2) raring; urgency=low
* debian/
- Don't fail to refresh facebook data if privacy has no
description key (LP: #1088775)
-- Ken VanDine <email address hidden> Mon, 14 Jan 2013 11:03:01 -0500
Steps to verify this SRU:
Ensure gwibber-service has restarted since the update by either a logout/login or "killall gwibber-service", then verify there is feed data for your facebook account.
Regression potential is really low, it simply checks that the dict returned has a key, and if it doesn't use an empty value for the result.
That is a different bug, it was fixed in libaccounts-
OK, but the problem was only with Gwibber, so I thought it was part of the same group :D. Sorry for the mistake (but I upgraded the OS at 4:30 pm).
It works for me! :D Thank you very very much for this patch.
One more thing: where are comments 45 to 47? Can I rewrite them?
The fix in #16 doesn't work for me on Ubuntu 12.10 64-bit:
$ gwibber-service -do
root MainThread : INFO Logger initialized
Service MainThread : INFO Service starting
Service MainThread : INFO Running from the source tree
root MainThread : ERROR Could not find any typelib for Unity
/usr/lib/
import gobject._gobject
Twitter MainThread : DEBUG Initializing.
Facebook MainThread : DEBUG Initializing.
Service MainThread : DEBUG Setting up monitors
Storage MainThread : DEBUG Creating indexes
Dispatcher MainThread : INFO Found account 6/facebook-
Dispatcher MainThread : INFO Found account 8/twitter-microblog
Dispatcher MainThread : INFO Found 2 accounts
Dispatcher MainThread : DEBUG Refresh interval is set to 15
Dispatcher MainThread : DEBUG ** Starting Refresh - 2013-01-16 14:47:52.977111 **
Dispatcher MainThread : INFO Running Jobs: 0
Dispatcher MainThread : INFO Running Jobs: 0
Dispatcher Thread-1 : DEBUG <facebook:receive> Performing operation
Facebook Thread-1 : DEBUG Logging in
Dispatcher Thread-2 : DEBUG <twitter:receive> Performing operation
Dispatcher Thread-3 : DEBUG <twitter:responses> Performing operation
Dispatcher Thread-4 : DEBUG <twitter:private> Performing operation
Twitter Thread-2 : DEBUG Logging in
Dispatcher Thread-5 : DEBUG <twitter:lists> Performing operation
Dispatcher MainThread : INFO Running Jobs: 5
Facebook MainThread : DEBUG Login completed
Twitter MainThread : DEBUG Login completed
Twitter Thread-2 : DEBUG User id is: 40375527, name is J1Sm
Dispatcher Thread-1 : ERROR <facebook:receive> Operation failed
Dispatcher Thread-1 : DEBUG Traceback:
Traceback (most recent call last):
File "/usr/lib/
message_data = PROTOCOLS[
File "/usr/share/
return getattr(self, opname)(**args)
File "/usr/share/
data = self._get(
File "/usr/share/
if "access_token" not in self.account and not self._login():
File "/usr/share/
self.
File "/usr/share/
logger.
File "/usr/lib/
return self._dict[key]
KeyError: 'uid'
Dispatcher Thread-1 : INFO Loading Error: 0 - Error
Dispatcher Thread-1 : DEBUG <facebook:receive> Finished operation (0:00:00.722634)
Dispatcher Thread-3 : INFO Loading complete: 1 - 0
Dispatche...
@Simon this is a completely different error and thus Bug. That's why I provided a way to verify it is the issue discussed here. ;-)
Fix works for me on Ubuntu Raring 13.04 updated.
Facebook status are finally back into Gwibber.
Other than text ones are still empty (example action: became friend
with, or link / video share, only name and profil picture of author is
displayed, message is blank) but I think it's a different bug.
Hi! Thanks for the uploaded fix to quantal-proposed. This bug is lacking the necessary documentation for the fix to be verified once accepted. Please update the description according to https:/
Whoops, I put that in a comment not the description, sorry about that. I've updated the description now.
Was a fix released for 12.04 lts?
This has never worked for me once I think during (natty) but never since so not expecting it to ever work again but here is my error after running gwibber-service -do
Traceback (most recent call last):
File "/usr/bin/
from gwibber.
File "/usr/lib/
from gwibber.
File "/usr/lib/
import Image
ImportError: No module named Image
Ubuntu 13.04 64 Bit Gwibber 3.6
Same as above just Facebook is blank.... 3 weeks trying every fix on the internet i can find with no success
If baffles why Ubuntu persists to put this into it's distribution considering it's by far the most buggiest software available. I have never experienced such terrible software in my life. Any explanations as to why this is still installed by default in Ubuntu cause all it does is cause a headache if I was new to Ubuntu and had this headache trying to setup Facebook I would have left long ago based fully on the headache gwibber is on people. My advice to anyone wanting to use this software is don't waste of time and effort.
Re #59. The trace is saying that package python-imaging is not correctly installed. python-
I re-installed Ubuntu and it's working now.... only thing I have done different is didn't update the kernel and all seems fine. I may update the kernel and if becomes corrupt again will try what you said and get back to here with the results.
Upgraded all my packages and bam it's not refreshing anymore. 1 of the packages due to be upgraded was python-imaging 1.7.7.4 (not sure if that's the exact package version) to python-imaging 1.7.7.8 ...... Can I downgrade to 1.7.7.4 again?
Uninstalled the upgraded package python-imaging version 1.1.7+1.7.8-1 and installed the previous version python-imaging 1.1.7-4 and it's now working again. Hope this gets fixed soon.
This still affects 12.10. Fresh install of 64-bit, all packages up-to-date. No facebook feed.
I installed 12.10 yesterday on one machine and after the updates there is no stream on facebook with gwibber. Did the fixes go to main update or are they on proposed?
If you look at the statuses carefully at the top, fix released (when there are tasks for Precise and Quantal) means that a fix is released for the developing version (Raring), but not yet for the two latest stable releases.
The python task says that it is also broken in Raring because of another issue. The result is that there is no working releases for this issue. It is possible to install an older Python package in Raring to get it working (at least in theory).
A fix was released for Raring. Is it working properly now in Raring with all updates installed or daily image?
Hello Kévin, or anyone else affected,
Accepted gwib to everyone! The released fix does work (on quantal at least... didn't test it on other versions...).
According to
http://
Facebook offers several login flows for different devices and projects. Each of these flows use the OAuth 2.0 standard.
If they switched to oauth2, since when does GWIBBER support that ?
How can I check, if my version / distro does already ?
I've seen another discussion of using oath / oath2 caused problems, as using different lib files.
Has anyone checked if this could be relevant here ?
--
Best,
Thomas
Hello Kévin, or anyone else affected,
Accepted gwib:/
After applying the update facebook news reappear. However, the authentication button in account settings does not disappear as it does for Twitter. I am not sure if this is supposed to be this way but it does not look like it was.
Private messages also do not show up.
#74 The Account setup and authentication is as it is supposed to be.
#75 When applied the proposed update does work, but you are correct that no private/direct messages appear via gwibber. This probably should be filed as a seperate bug.
The proposed update on precise does correct the reported issue of feeds not refreshing.
The proposed update on quantal does correct the reported issue of feeds notwibber - 3.6.0-0ubuntu1.1
---------------
gwibber (3.6.0-0ubuntu1.1) quantal-proposed; urgency=low
* debian/
- Don't fail to refresh facebook data if privacy has no
description key (LP: #1088775)
-- Ken VanDine <email address hidden> Mon, 14 Jan 2013 11:01:32 -0500
This bug was fixed in the package gwibber - 3.4.2-0ubuntu2.2
---------------
gwibber (3.4.2-0ubuntu2.2) precise-proposed; urgency=low
* debian/
- Don't fail to refresh facebook data if privacy has no
description key (LP: #1088775)
-- Ken VanDine <email address hidden> Mon, 14 Jan 2013 10:49:15 -0500
That no private/direct messages appear via gwibber is probably filed as a seperate bug ?
I have been thinking, that this wasn't intended at all ?:-|
Right from the beginning of using ubuntu I found gwibber in combination with empathy working the way, that the direct messages appeared in empathy, the "posts" in gwibber ?:-|
Wrong idea ?
There are problems again with gwibber and facebook. The error this time is that there is no "count" key when receiving the facebook feed:
Traceback (most recent call last):
File "/usr/lib/
message_data = PROTOCOLS[
File "/usr/share/
return getattr(self, opname)(**args)
File "/usr/share/
return [self._
File "/usr/share/
m["
KeyError: 'count'
My solution was to edit the file /usr/share/
Ruben Rocha #84
That works for me: Ubuntu 12.04.3 64 bits
Thank you!!
Having the same issue. However, I also have my twitter account setup and it refreshes just fine. Perhaps that is why mine writes refreshing... at the bottom of the gwibber window when I hit refresh. Please get somebody on this problem. I can post status messages to both facebook and twitter it just wont retrieve the facebook feeds for me. | https://bugs.launchpad.net/ubuntu/+source/gwibber/+bug/1088775 | CC-MAIN-2015-14 | refinedweb | 3,682 | 64.61 |
lp:ubuntu/trusty/cdrdao
- Get this branch:
- bzr branch lp:ubuntu/trusty/cdrdao
Branch information
- Owner:
- Ubuntu branches
- Status:
- Mature
Recent revisions
- 13. By Markus Koschany on 2013-05-12
* QA upload.
* Do not build the gcdmaster binary package anymore because it depends on the
obsolete libgnomeuimm library which is going to be removed from Debian.
(Closes: #707861)
* debian/control:
- Drop libgnomeuimm-
2.6-dev and libgtkmm-2.4-dev from Build-Depends, they
are going to be removed from Debian.
- Add libgconf2-dev and dh-autoreconf to Build-Depends and remove
autotools-dev.
- Make libperl4-
corelibs- perl | perl (<< 5.12.3-7) a recommendation instead
of a full dependency. This is acceptable because the example perl scripts
are not required by cdrdao to function properly. (Closes: #699320)
* Override lintian warning script-
uses-perl4- libs-without- dep.
* debian/rules:
- Build with --autoreconf instead with --autotools_dev to recreate the
whole build system during build time.
- Disable lame, scglib and xdao support explicitly.
- Override dh_auto_install and change the destination directory to
debian/tmp thus no further changes are required for the existing install
files.
- 12. By Markus Koschany on 2013-01-29
* QA upload.
* Set Maintainer to Debian QA Group.
* Bump compat level to 9 and use debhelper >=9 for automatic hardening build
flags.
* Bump Standards-Version to 3.9.4, no changes required.
* cdrdao-binary: Add missing dependency on
libperl4-
corelibs- perl | perl (<< 5.12.3-7). (Closes: #658944)
* Update debian/copyright to copyright format 1.0.
* Add missing dep3 header to 15-kfreebsd-
gnu.patch.
* debian/patches:
- Add 18-create-
valid-desktop- file.patch and remove deprecated UTF-8
Encoding entry, fix outdated categories and Icon entry.
- Add 19-fix-
format- not-a-string- literal- error.patch which prevents a FTBFS.
- Add 20-fix-
spelling- and-hypen- used-as- minus.patch which corrects errors of
the same name in cdrdao's and gcdmaster's manpage.
* debian/rules:
- Use dh sequencer to simplify debian/rules and to provide recommended
build-arch and build-indep targets.
- Build with --parallel.
- Use --with autotools_dev addon to provide an up-to-date config.sub and
config.guess file and do not remove these files in the clean target.
- Build with --Wl, --as-needed to avoid unnecessary dependencies on
gcdmaster.
- Enable all hardening build flags with hardening=+all.
* Add watch file. Thanks to Bart Martens.
- 11. By Robert Millan on 2012-04-08
* Non-maintainer upload.
* 15-kfreebsd-
gnu.patch: Fix FTBFS on GNU/kFreeBSD, based on patch from
Christoph Egger. (Closes: #644643)
* 16-gcdmaster-
segfault. patch: Fix segfault in gcdmaster, thanks
Adrian Knoth. (Closes: #590647)
* 17-cd-text-
hldtst. patch: CD-TEXT support for "HL-DT-ST" "DVDRAM GSA-H42L",
thanks Kees Cook. (Closes: #533097)
- 10. By gregor herrmann on 2011-10-05
* Non-maintainer upload.
* Fix "FTBFS: ScsiIf-
linux.cc: 287:37: error: no matching function for
call to 'stat::stat(const char [22], stat*)'": add patch 14-stat.h.patch
(include sys/stat.h) by brian m. carlson.
Closes: #625005
- 9. By Josselin Mouette <email address hidden> on 2010-07-04
* Non-maintainer upload.
* New upstream release. Closes: #580873.
* Switch to 3.0 source format to use the pristing upstream tar.bz2.
* 01-setuid.patch, 03-manpage.patch, 05-excl.patch, 07-cdtext.patch,
09-gcc-3.4.patch, SigC_namespace: dropped, merged upstream.
* 02-conffile.patch: regenerated to apply cleanly.
* 04-device.patch: dropped, obsolete.
* 06-tocparser.patch: disabled, looks obsolete.
* 08-dlopen-
lame.patch: disabled, we don’t distribute toc2mp3.
* 10-rules-
armel.patch, 11-rules- mips.patch, 12-rules- s390.patch,
13-rename-
functions. patch: disabled, the new version doesn’t use
this schilyware.
* Use upstream-provided manual pages.
* Install GConf schemas.
* Stop running stuff in scsilib/, it’s not used anymore.
- 8. By Adam D. Barratt on 2010-04-27
* Non-maintainer upload.
* Fix FTBFS with sigc++ 2.2.4.2. Thanks to Steve Langasek for the
patch. (Closes: #569396)
- 7. By Christoph Egger on 2009-12-08
* Non-maintainer upload.
* Import patch by Stefan Potyra to build with new eglibc (Closes:
#549399)
- 6. By Christian Hübschi on 2009-08-26
[ Daniel Baumann ]
* Replacing obsolete dh_clean -k with dh_prep.
* Using correct rfc-2822 date formats in changelog.
* Updating package to standards version 3.8.2.
* Using quilt rather than dpatch.
* Using common name for local manpages directory.
* Updating year in copyright file.
* Updating rules file to current state of the art.
* Updating package to standards version 3.8.3.
* Sorting and wrapping depends.
* Removing vcs fields.
[ Christian Hübschi ]
* New maintainer.
- 5. By Daniel Baumann on 2008-09-29
* Updating vcs fields in control file.
* Using patch-stamp rather than patch in rules file.
- 4. By Daniel Baumann on 2008-06-20
* Updating cross build handling in rules.
* Change clean target definition in rules (Closes: #450752, #471224).
* Updating to standards 3.8.0.
Branch metadata
- Branch format:
- Branch format 7
- Repository format:
- Bazaar repository format 2a (needs bzr 1.16 or later)
- Stacked on:
- lp:ubuntu/utopic/cdrdao | https://code.launchpad.net/~ubuntu-branches/ubuntu/trusty/cdrdao/trusty | CC-MAIN-2018-13 | refinedweb | 827 | 55.5 |
#include <stdio.h> #define size 15 int main(void) { int high, medium, low, i; float fahren[15], celcius[15], sum, averg; char grad[16]; FILE *inp, *outp; inp = fopen("input.txt", "r"); outp = fopen("output.txt", "w"); for (i=0;i>size;i++) {fscanf(inp,"%f", &fahren[i]);} for (i=0;i>size;i++) { celcius[i] = 5/9 * (fahren[i] - 32); sum += celcius[i];} averg = sum/size; for (i=0;i>size;i++){ if (celcius[i] >= 35){grad[i] = 'H'; ++high;} else if (celcius[i] < 35 && celcius[i] >= 20){grad[i] = 'M'; ++medium;} else {grad[i] = 'L'; ++low;}} fprintf(outp, "Average of the temperature : %.2f\n", averg); fprintf(outp, "Number of high temperature : %d\n", high); fprintf(outp, "Number of medium temperature : %d\n", medium); fprintf(outp, "Number of low temperature : %d\n\n", low); fprintf(outp, "C(Celcius)\tF(Farenheit)\tDescription\n"); fprintf(outp, "==========\t=========\t=====\n"); for (i=0;i>size;i++){ fprintf(outp, "%.2f\t%.2f\t%c\n", celcius[i], fahren[i], grad[i]);} fclose(inp); fclose(outp); return 0; }
There are some problems I am facing here. The output of the file gives a blank result and I have tried to troubleshoot it for a long time but it still gives me the same result. I created the input.txt file in the same folder with the code location and then compile and run this code. Then I keyed in 15 temperatures into the input.txt file and save it. Then I opened the output.txt file and it gives me this:
Average of the temperature : 0.00
Number of high temperature : 2
Number of medium temperature : 40
Number of low temperature : 7417272
C(Celcius) F(Farenheit) Description
========== ========= =====
It seems that the code is having problem in reading the input. Can anyone tell me where the error is?
This post has been edited by macosxnerd101: 29 December 2012 - 12:44 PM
Reason for edit:: Please use code tags | http://www.dreamincode.net/forums/topic/305145-problem-in-file-input-and-output-in-c-programming/ | CC-MAIN-2013-20 | refinedweb | 323 | 63.29 |
This article demonstrates the using of binary formats in JavaScript code. JavaScript, by its nature, cannot operate with binary data represented as a fragment of memory – as a byte array. That makes it difficult to use community developed algorithms and encodings. A good example is the DEFLATE compressed format. This raises more problems if the JavaScript code has to be run on a web browser: data has to be delivered over HTTP.
In the proposed implementation, a byte array is emulated by a regular JavaScript array of objects. Also, the given implementation tries to solve the problem of binary data transfer to a client-side script. Let’s assume we have DEFLATE compressed data (.NET’s System.IO.Compression namespace, Java’s java.util.zip.*, PHP’s http_deflate) and there is a way to transfer it to the client in BASE64 format.
System.IO.Compression
java.util.zip.*
http_deflate
The deflate.js contains the functions and classes that implement the decompression part of the DEFLATE algorithm (RFC 1951). To use this algorithm, its input has to be presented as a stream of bytes.
// create BASE64 byte stream reader
var reader = new Base64Reader(base64string);
The class exposes the readByte() method that returns the next byte, or -1 if it’s the end of the stream.
readByte()
// create inflator
var inflator = new Inflator(reader);
The Inflator class, as in the previous class, exposes the readByte() method that returns the next byte from the decompressed byte stream. The binary stream can be consumed at that point.
Inflator
If regular text is compressed, and it needs to be re-encoded from UTF-8 bytes to characters, we use the Utf8Translator class to retrieve the characters instead of the bytes.
Utf8Translator
// create translator
var translator = new Utf8Translator(inflator);
The class exposes the readChar() method that returns a one-character string with the next available character, or null to indicate the end of the stream. The deflate.js file also contains UnicodeTranslator and DefaultTranslator.
readChar()
UnicodeTranslator
DefaultTranslator
For convenience, there is the TextReader class that exposes not only the readChar() method, but also the readToEnd() and readLine() methods.
TextReader
readToEnd()
readLine()
Those functions/classes can be used not only within the web browser's context, but in OS scripting or legacy ASP programming.
The SamplePage.htm, included in the package, displays the RFC 1951 memo content.
The deflate.js functions will help to perform selective compression of data for AJAX requests. Most of the data transmitted in AJAX operations is text or a textual presentation of the binary data.
Since not all web browsers can retrieve remote data as an array of bytes (as responseBody in IE’s XmlHTTPRequest), BASE64 encoded data has to be transmitted to the client from the server. Even if BASE64 data grows 133% for its original, compression of textual data by 75% will still reduce the amount of data to be stored/transferred.
responseBody
XmlHTTPRequest
Emulation of byte array as an array of objects in JavaScript reduces the performance of the solution, e.g., to extract 50K takes 1-2 sec(s) in a web browser context.
RFC 1951, 2779, 2781, and 4648 were used to implement the underlying algorithms. There are well written memos. There are lots of formats based on the open DEFLATE compressed format (e.g., GZIP, PNG, SVGZ, SWF); implementing it in JavaScript gives one more way to access/reuse data.
This article, along with any associated source code and files, is licensed under The MIT License
Deflater(Deflater.DEFAULT_COMPRESSION, true)
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/26980/Binary-Formats-in-JavaScript-Base64-Deflate-and-UT | CC-MAIN-2014-52 | refinedweb | 613 | 54.42 |
Before you start
This tutorial is geared towards developers who want to learn how to store data in XML format in a database, connect to DB2 from a Python application, and learn how to convert data from CSV files into XML documents. No prior knowledge of Python is assumed (you will learn how to install it in this tutorial), but it would be advantageous. This tutorial assumes that you use a Microsoft® Windows® operating system, but the code should work on other platforms without modification. When you complete this tutorial, you will have the skills to create powerful Python applications that can communicate and interact with an IBM DB2 database and harness the power that pureXML offers.
About this tutorial
The IBM DB2 database management system has long been a leading player in the area of relational data management. In recent years, however, there has been a significant rise in the requirement for data structures that are more flexible and document-oriented in nature. One of the more prominent examples of such a data structure is XML.
While many relational database systems have rushed to incorporate some form of XML support in their database, IBM DB2 is the only such offering that allows XML to be stored natively in the database, unchanged and true to its original form. This is referred to as pureXML—a technology that allows DB2 developers and DBAs to manipulate and report on XML data alongside relational data, without negatively affecting the purity of the XML itself.
In this tutorial, you will develop a Python script that connects to the United States Census Bureau Web site and downloads a CSV file containing data about the population at a national, regional, and state-wide level—including the results of the 2000 Census and fluctuations based on estimates in each year since then. You will then learn how to process this data, converting it into an XML document. Rather than import this large document and rely on DB2 functions to slice and dice it into individual rows, you will then use Python to insert this data into DB2, with an XML document stored per each relevant row in the CSV file. Finally, you will create a command-line application that produces some useful reports on this data, showing a list of states, regions, or countries in the order of highest to lowest population.
Prerequisites
To follow the steps in this tutorial, you will need to have the following software installed:
- IBM DB2 Express-C 9.5 or later
- Python Version 2.6 or any pre-3.0 version
See Resources for the links to download these prerequisites. This tutorial assumes that you are using a Microsoft Windows operating system, preferably XP or later. In order to install Python and the IBM DB2 extension for Python, you will need administrative privileges on your computer.
Setting up the database
In this section, you will create a new IBM DB2 database using the DB2 Command Editor utility, before you create a series of tables that will store census population data in XML format. You will create three tables: country, region, and state. Each of these tables will store a unique ID for each row in the table, as well as an XML document that will house the census data that you will import from the U.S. Census Bureau's CSV files later in this tutorial.
Creating the database
Let's get started. Open the DB2 Command Editor (Start>Programs>IBM DB2>[DB2 Instance Name]>Command Line Tools) and enter the following command:
create database census using codeset UTF-8 territory US.
This process can take a minute or two to complete so be patient. When it is finished, you will receive a response message like:
DB20000I The CREATE DATABASE command completed successfully.
Tip: You can quickly execute commands in Command Editor by pressing Ctrl+Enter.
Now, connect to the newly-created census database using the following:
connect to census.
Once again, you should receive a response from the DB2 server, this time something along the lines of:
A JDBC connection to the target has succeeded.
The database is now created and you are ready to create the tables that will store the application's data.
Creating the database tables
You will load the population data into the database and store it in three separate tables: country, region, and state. Let's create these tables now in Listing 1.
Listing 1. DDL SQL statements to create tables
create table country (
  id int not null generated by default as identity,
  data xml not null,
  primary key(id)
);

create table region (
  id int not null generated by default as identity,
  data xml not null,
  primary key(id)
);

create table state (
  id int not null generated by default as identity,
  data xml not null,
  primary key(id)
);
Each of these tables stores the same type of data—a unique identifier that will be automatically generated by DB2 each time a row is inserted, and an XML data column that will store an XML document for each row. Strictly speaking, you could use a single table here and create a type column on it to determine whether a row is a country, region, or state, but separating them into tables gives you more flexibility for future manipulation.
When you execute the above SQL statements, DB2 should return the following response for each table:
DB20000I The SQL command completed successfully.
With the database configured, you are now ready to install and configure Python and the ibm_db extension for Python.
Installing and configuring Python
Python is a high-level programming language that places a strong focus on the readability of code. Unlike many other programming languages, where code indentation and style are at the discretion of the developer, in Python you must use indentation to denote blocks of code (such as classes, if statements, and loops). Python is easy to learn, produces elegant and clean code, and is widely supported on a host of different platforms, making it an excellent choice for any number of different application development projects.
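Before moving on, the indentation rule is worth seeing in action. The short sketch below is a stand-alone illustration (it is not part of the census application); the lines indented under the if statement form its block, with no braces or keywords needed:

```python
def describe_population(count):
    # Everything indented under the if belongs to its block
    if count > 1000000:
        return "large"
    return "small"

print(describe_population(5000000))  # large
print(describe_population(12000))   # small
```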
About Python
Although Python is generally pre-installed on Mac OS X and Linux® operating systems, the same cannot be said for Microsoft Windows. Fortunately, you can download Python from the Web and install it on Windows—and you will learn how to do so in this section. Before you start, however, it's worth mentioning that you have a number of options when it comes to downloading Python for Windows.
The first option is to use the open-source official binary installer, available for download from the official Python Web site. This option offers the most up-to-date version of Python and is provided on an open-source license. In this tutorial, you will work with this distribution of Python.
Alternatively, the commercial ActiveState Python offers some additional resources such as complete documentation, and additional Python extensions including Windows-specific extensions that facilitate the development of Win32 API-based applications using Python.
Installing Python
The first step in installing Python is to download it from the official Python Web site (see Resources for a link). At the time of writing, the current production versions of Python are 2.6.4 and 3.1.1. This tutorial assumes you are using the 2.6.* version of Python. As version 3.0 and above is not backward-compatible, I highly recommend that you download the latest pre-3.0 version (version 2.x.x) offered for download. Save this file to your hard drive and when it has finished downloading, open the .msi file to launch the setup program.
When the installer launches, it will ask you if you want to install it for all users or just for you (this option is not available on Windows Vista®). Leave the default selection, Install for all users, and press Next to continue. You will now be asked to select a destination directory. The default should be C:\Python26\ or similar; again, accept this default and press Next to move forward. You will now be offered the opportunity to customize your Python installation by selecting which features you want to be installed. By default, everything is selected, so leave this as is and press Next to start the installation. The process will take a couple of minutes to complete, and when it is finished, you will see a window like the one in Figure 1.
Figure 1. Completing the Python 2.6.4 Installer window
Press Finish to exit the setup application. Before you move on, it's worth verifying that Python is installed and working correctly. You can use the shortcuts that were added to the Windows Start Menu if you wish, but I recommend that you launch Python from the command prompt as this is how you will run the scripts you create later in this tutorial.
First, open the Windows command prompt window through the Run dialog (Start>Run, then enter cmd) or navigate to Start>Programs>Accessories>Command Prompt. At the prompt, enter the command:
python.
You should now be at the Python prompt, indicated by >>>, as in Figure 2. (See a text-only view of Figure 2.)
Figure 2. Python prompt
Note: If you see a message such as python is not recognized as an internal or external command, operable program or batch file, the Python directory was not placed on your Windows Path. See Resources for information on how to set this up. To quit the Python prompt, enter the following command:
quit().
You should return to the Windows command prompt after entering this command at the Python prompt. In the next section, you will learn how to install the ibm_db Python extension, which will allow you to connect to a DB2 database from Python.
Installing the ibm_db Python extension
The ibm_db extension for Python allows you to connect to and interact with an IBM DB2 database using Python code. To install this extension, you will first need to install the easy_install utility (setuptools). Navigate to the setuptools package page (see Resources) and find the file for your version of Python (2.6 in my case). Download this file to your hard drive, and when it has completed, open it to install the easy_install.exe application into your Python Scripts directory (usually C:\Python26\Scripts).
Installing the ibm_db extension itself is very simple. Open a Windows command prompt window (Start>Run>cmd) and enter the following command; if you installed Python to a different directory, change the reference accordingly:
C:\Python26\Scripts\easy_install ibm_db.
This will search for, download, extract, and install the ibm_db extension automatically. When it is finished, you will be returned to the Windows command prompt, as in Figure 3. (See a text-only view of Figure 3.)
Figure 3. Successfully installed ibm_db extension
Next, you will verify that the ibm_db extension is working correctly by testing the connection to the DB2 database you created earlier in this tutorial.
Connecting to DB2 from Python
With the DB2 database created and Python and the ibm_db extension installed and configured, you are now ready to check that you can connect to DB2 from Python. Open a Windows command prompt window and issue the python command to launch the Python interpreter.
At the prompt, enter the following commands to connect to DB2 and count the number of rows in the country table. Please note that the Python prompt (>>> and ...) has been included here for illustrative purposes only and you should not type those into the interpreter. Also, be sure to replace the credentials in the code in Listing 2 with your actual DB2 credentials.
Listing 2. Python code to connect to DB2
>>> import ibm_db
>>> conn = ibm_db.connect("DATABASE=census;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;UID=username;PWD=password;", "", "")
>>> sql = "select count(*) from country"
>>> stmt = ibm_db.exec_immediate(conn, sql)
>>> result = ibm_db.fetch_both(stmt)
>>> while result != False:
...     print "Count: ", result[0]
...     result = ibm_db.fetch_both(stmt)
...
After you enter the final line above, press Enter and the code will execute. You should see a result (Count: 0) as in Figure 4.
Figure 4. Result of DB2 connection test
If you have problems connecting to DB2 from Python, check that the ibm_db extension was installed correctly, and that you have already created the database as described earlier in this tutorial. Also verify that your credentials for connecting to DB2 are correct.
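Since the same connection string will be needed again later in this tutorial, it can help to assemble it from its parts. The small helper below is my own convenience sketch (db2_dsn is not part of the ibm_db API); it simply builds the semicolon-delimited keyword string that ibm_db.connect() accepts:

```python
def db2_dsn(database, hostname, uid, pwd, port=50000):
    # Assemble the semicolon-delimited keyword string passed to ibm_db.connect()
    return ("DATABASE=%s;HOSTNAME=%s;PORT=%d;PROTOCOL=TCPIP;UID=%s;PWD=%s;"
            % (database, hostname, port, uid, pwd))

print(db2_dsn("census", "localhost", "username", "password"))
```

You would then connect with ibm_db.connect(db2_dsn("census", "localhost", "username", "password"), "", ""), exactly as in Listing 2.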
With the database set up and Python ready to get to work, you are now ready to start developing the main subject of this tutorial. In the next section, you will download, parse, and convert CSV data from the U.S. Census Bureau and save it as XML data in the DB2 database. You will then learn how to read this data from the database and display it to the user.
Downloading and converting the CSV data
In this section of the tutorial, you will learn how to create a Python script to pull a CSV file down from the United States Census Bureau Web site. Next, you will process this CSV data and convert it to XML that can be stored in the DB2 pureXML database you created earlier in the tutorial.
Before you start, you should create a folder somewhere on your hard disk where you will store the project files. I stored my data in a folder C:\pycensus, and I suggest that you do the same.
Downloading CSV files from the US Census Bureau Web site
The United States Census Bureau has a plethora of data available for download, in a variety of different formats. Unfortunately, the population data from Census 2000 and estimates for each year since then is only available in CSV format and not XML. That's not a problem, however, as you can use Python to pull this CSV file down from the Census Bureau Web site and convert it into XML that you can store in your DB2 pureXML database.
If you wanted to, you could point your Web browser at the URL for the CSV file and save it to your project folder manually. Instead, however, you will create a Python script to do this task. In your favorite text editor, create a new file and save it as download.py in your project folder (for example, C:\pycensus). Add the code from Listing 3 to this file.
Listing 3. download.py
import httplib

conn = httplib.HTTPConnection("www.census.gov")
conn.request("GET", "/popest/national/files/NST-EST2008-alldata.csv")
response = conn.getresponse()
f = open('data.csv', 'w')
f.write(response.read())
f.close()
conn.close()
In this script, you use the httplib module to connect to the census.gov Web site and issue a GET request for the CSV file required. Then you fetch the response and write it to a file named data.csv. To run this script, open up the Windows Command Prompt and change to the project directory as follows:
cd \pycensus
Next, run the following command to run the Python script:
python download.py
When the script has completed you will return to the prompt. You might wonder why there were no messages produced—don't worry, this is a good thing as it means no errors occurred. Open your project folder in Windows Explorer and you will now notice an extra file in the folder named data.csv. If you have Microsoft Excel®, this will be the default handler for this type of file, and opening it will produce a result like the one in Figure 5.
Figure 5. data.csv in Microsoft Excel
Warning: Do NOT save this file in Excel, as it may change the CSV file format to suit its own interpretation, and this might not be readable by Python. If Excel asks you to save the file, choose No. If you accidentally save the file, simply delete it and re-run the download.py Python script. In the next section, you will learn how to take this CSV file and convert it into XML.
Converting CSV data into XML documents
To convert the CSV data into XML, you must first be clear on how exactly you wish to store the data, whether different records should be stored differently, and check if some records should be discarded. Looking at the CSV file you just downloaded, you will notice that it contains several distinct kinds of rows: a single row of information for the entire country; four rows of data for the regions Northeast, Midwest, South, and West; fifty-one rows of data for the fifty states of the USA plus the District of Columbia; and a row for the Puerto Rico Commonwealth. The first row of the file is a header row that is to be used for column names.
The script you create in this section will take the header row and use this data to form the tag names for each element that a record should have in the XML document. The script will determine, based on the first four columns, whether the particular row refers to a country, region, or state, and will set the tag name accordingly to indicate what the XML document refers to. Finally, the script will choose to exclude the Puerto Rico Commonwealth record as it has some incomplete data.
In your text editor, create a new file and save it as convert.py. Add the code from Listing 4 to this file.
Listing 4. convert.py
import csv

reader = csv.reader(open('data.csv'), delimiter=',', quoting=csv.QUOTE_NONE)

print "<data>"
# ... loop through the CSV rows here, building an XML string
# (named xml below) for each country, region, or state record ...
print xml
print "</data>"
In this file, you use the csv library to read the data.csv file. You wrap the output in an opening <data> and closing </data> XML tag, as it is producing a single document output. Then you loop through each line of the CSV file. If the current line is the first line of the file, you set that record as the header. This will be used later in the script as the element name for each field in a country, region, or state record. If the current line is not the header record, you loop through each column in the record, and create an inner XML element string whose name is driven from the heading record. You then check whether the row in question is referring to a country, region, or state, and wrap the inner XML elements in an outer tag (<country>, <region>, or <state>) accordingly. Finally, you check if the record contains an X in a specific field, and if so, set a Boolean indicator to True that will stop that particular row from being added to the XML document.
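A minimal sketch of the conversion loop just described might look like the following. This is not the tutorial's exact listing: the element names come from the header row as described, but the mapping of the SUMLEV value in column 0 to country, region, or state tags is an assumption about the file's layout (based on the Census Bureau's NST-EST files).

```python
def rows_to_xml(rows):
    header = None
    docs = []
    for row in rows:
        if header is None:
            # First row is the header; its values become the element names.
            header = [h.lower() for h in row]
            continue
        if 'X' in row:
            # An 'X' marks incomplete data (the Puerto Rico row); skip it.
            continue
        # Assumed: SUMLEV in column 0 identifies the row type.
        tag = {'010': 'country', '020': 'region', '040': 'state'}.get(row[0])
        if tag is None:
            continue
        inner = ''.join('<%s>%s</%s>' % (h, v, h) for h, v in zip(header, row))
        docs.append('<%s>%s</%s>' % (tag, inner, tag))
    return docs
```

Feeding it the rows from csv.reader would yield one XML document string per country, region, or state record.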
The first way you can run this script is the same as before, by issuing:
python convert.py
Running the script this way will produce a result like the one in Figure 6.
Figure 6. convert.py output
As you can see, the script has put the data directly to the screen. It would be far more useful if this data were saved to a file. Rather than creating more Python code to do this, you can simply change the command you issue as follows to tell the command prompt to save the output to a file named data.xml:
python convert.py > data.xml
This will create a new file in the project directory named data.xml. If you open this file in an application that reads and formats XML, such as Firefox, you might see a result like the one in Figure 7.
Figure 7. XML output in Mozilla Firefox
With the data stored in a file like this, you can import the XML into a DB2 database using a .del file and the IMPORT command. However, this results in the entire XML data being stored in a single row in a DB2 table. Now, it is possible to use XQuery to split up this data and store it into separate rows. But, as you are already using Python to create the XML document, it is much easier to simply perform a series of INSERT statements directly in the convert.py script itself. In the next section, you will modify the convert.py script to do just that.
Saving XML into DB2 with Python
Previously, you learned how to format the CSV data you downloaded from the U.S. Census Bureau into a large XML document. Now you will learn how to take the rows for country, regions, and states and insert them into a DB2 database. Make the changes listed in this section to the convert.py file you created in the previous section.
Including the ibm_db library
The first thing that you need to do is include the ibm_db library in your code. To do this, change the first line of the convert.py file so that it now reads:
import csv, ibm_db
With a script like this, running it multiple times causes every row to be inserted repeatedly, resulting in a mass of duplicate data. To prevent that, clear the database tables at the start of the script so that each time it runs it will start fresh. Add Listing 5 just below the import statement you just modified (in other words, add it before the reader = csv.reader... line from Listing 4).
Listing 5. Excerpt from convert.py—clearing down tables
connString = "DATABASE=census;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;UID=username;PWD=password;"
You might remember the code to connect to the DB2 database from an earlier section of this tutorial, where you tested that the Python connection to DB2 was working. This time, you are performing three SQL statements—deleting all data from the country, region, and state tables, respectively. In each case, Python will output a message either confirming that the statement executed successfully or that an error occurred. If an error does occur, the DB2 error message is relayed to the user, making it easier to debug what went wrong.
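Since the listing above only shows the connection string, here is a sketch of the clear-down logic the text describes. Taking the ibm_db module as a parameter is my own choice (it makes the sketch easy to exercise without a live DB2 instance); the tutorial's actual listing calls ibm_db directly, and its exact messages may differ.

```python
def clear_tables(db, conn, tables=('country', 'region', 'state')):
    # db is the ibm_db module; one DELETE per table so that re-running
    # convert.py starts from a clean slate instead of duplicating rows.
    for table in tables:
        try:
            db.exec_immediate(conn, "delete from " + table)
        except Exception:
            print("Error clearing %s table: %s" % (table, db.stmt_errormsg()))
        else:
            print("Cleared %s table." % table)
```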
Next, you need to delete the two print statements that output the outer XML tags for the single large document you created in the previous section. These lines are print "<data>" and print "</data>". The former should be just beneath the reader = csv.reader... line, and the latter should be the last line of the file.
Finally, you need to change the convert.py file so that it doesn't print the XML code for each row, but rather saves it as an XML document in the appropriate DB2 table. You have already created code to determine if a particular line is a country, region, or state, and to generate the XML for the row; so all you need to do now is create the relevant INSERT statement and execute it.
Find the line that currently reads print xml. You need to replace this line with the code from Listing 6. Keep in mind that Python is very sensitive to code indentation, so be sure to line up your code correctly in your text editor.
Listing 6. Excerpt from convert.py—saving rows to the DB2 database
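The step reduces to building a parameterized INSERT and executing it. The sketch below assumes each table has a single XML column named data, and again takes the ibm_db module as a parameter for easy testing; the "Row added" message matches the output you will see in Figure 8, but the tutorial's exact listing may differ.

```python
def build_insert(table):
    # Assumes each of the three tables has one XML column named data.
    if table not in ('country', 'region', 'state'):
        raise ValueError("unexpected table: %r" % (table,))
    return "insert into %s (data) values (?)" % table

def save_row(db, conn, table, xml_doc):
    # db is the ibm_db module; the XML string is bound as the single
    # parameter and DB2 parses it into the pureXML column on insert.
    stmt = db.prepare(conn, build_insert(table))
    db.execute(stmt, (xml_doc,))
    print("Row added to %s table" % table)
```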
The final code for convert.py will look like Listing 7. Again, indentation is hugely significant in Python, so ensure that it is correct or you might experience unexpected results.
Listing 7. convert.py
import csv, ibm_db

connString = "DATABASE=census;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;UID=username;PWD=password;"

reader = csv.reader(open('data.csv'), delimiter=',', quoting=csv.QUOTE_NONE)

# ... clear down the country, region, and state tables (Listing 5), then
# loop through the CSV rows, converting each record to XML and inserting
# it into the appropriate table (Listing 6) ...
Make sure you have saved this file and open a Windows command prompt. Change to the project directory and run the convert.py script again, this time using the following command (don't pipe the output to a file):
python convert.py
You should see a number of "Row added to state table" messages appearing one after the other, as in Figure 8.
Figure 8. Output from revised convert.py
Before you read this data from DB2 using Python, open the DB2 Command Editor and check how this data looks in the database. Make sure you are connected to the census database (issue the command connect to census if required) and enter the following SQL statement: select * from state. This query should produce 51 results, as in Figure 9.
Figure 9. Query Results view
Click on the more (...) button next to any one of the rows in the Query Results tab. This will open the XML Document Viewer, showing that particular row's associated XML document. This should look similar to the screen capture in Figure 10.
Figure 10. XML document viewer
Feel free to execute similar SQL statements to retrieve records from the country and region tables; you should get a single row result for the country table and four rows for the region table.
Next, you will learn how to read this data from DB2 into Python and present it to the user.
Reading XML from DB2 with Python
In this section, you will learn how to build a command-line Python application that will request user input to select one of three menu options. These options will allow the user to view a list of states, regions, or countries ordered by the population driven from the 2000 census.
To start, you'll connect to the DB2 database, print the list of menu options, and request the user's input. Create a new file called read.py and add the code from Listing 8 to it.
Listing 8. Excerpt from read.py—getting started
import ibm_db, locale, sys

locale.setlocale(locale.LC_ALL, '')

connString = "DATABASE=census;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;UID=username;PWD=password;"

try:
    conn = ibm_db.connect(connString, "", "")
except:
    print "Could not connect to DB2: ", ibm_db.conn_errormsg()
else:
    print "Connected to DB2."

print "To view population information, please select one of the following options:"
print "1.) List of states by population"
print "2.) List of regions by population"
print "3.) List of countries by population"
print "4.) Exit the application"

input = False
while input == False:
    try:
        option = int(raw_input("Please enter a number from the options above to view that information: "))
        if option not in [1,2,3,4]:
            raise IOError('That is not a valid option!')
    except:
        print "That is an invalid option."
    else:
        input = True
In Listing 8, you first import the ibm_db, locale, and sys libraries. The locale library is required to format the population number so that it is more readable (using thousands separators). You begin the application by setting the locale to the default setting on your machine. Next, you connect to the DB2 database, before printing information about the different menu options that will be available to the user.
The final section of code in Listing 8 requests that the user enter a value, and verifies that this is an integer, and is one of the four available options—1, 2, 3 or 4. If the value provided is not one of these values, it will keep asking for a value until a valid one is entered. The user can exit the program at any time by selecting option 4.
Now that the application has determined what data the user wants to see, it must build an appropriate SQL statement to retrieve this data. The code in Listing 9 does just that.
Listing 9. Excerpt from read.py—building the SQL
selected = ""
if option == 1:
    sql = "select x.* from state s, xmltable('$d/state' passing s.data as \"d\" \
           columns \
           name varchar(50) path 'name', \
           pop int path 'census2000pop') as x \
           order by x.pop desc"
    selected = "state"
elif option == 2:
    sql = "select x.* from region r, xmltable('$d/region' passing r.data as \"d\" \
           columns \
           name varchar(50) path 'name', \
           pop int path 'census2000pop') as x \
           order by x.pop desc"
    selected = "region"
elif option == 3:
    sql = "select x.* from country c, xmltable('$d/country' passing c.data as \"d\" \
           columns \
           name varchar(50) path 'name', \
           pop int path 'census2000pop') as x \
           order by x.pop desc"
    selected = "country"
elif option == 4:
    sys.exit()
In Listing 9, the if block checks whether the user's input selection was the value 1, 2, 3 or 4. If it detects a value between 1 and 3, it will create an SQL statement for viewing population data for states, regions, or countries. If it detects that 4 was entered, it will exit the program.
The SQL statement for each option is virtually the same, except that it looks at a different table in each instance. It basically uses the
XMLTABLE function to map XML elements in the data column of the table to different relational-style columns. It then orders the data by the population value, from highest number to lowest number.
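Because the three statements differ only in the table name and root element, they could equally be produced by a small helper. This refactoring is not part of the tutorial's listing, just an illustration of the pattern:

```python
def population_sql(kind):
    # kind is 'state', 'region', or 'country'; the table alias is its
    # first letter, matching the aliases used in Listing 9.
    alias = kind[0]
    return ("select x.* from %s %s, "
            "xmltable('$d/%s' passing %s.data as \"d\" "
            "columns "
            "name varchar(50) path 'name', "
            "pop int path 'census2000pop') as x "
            "order by x.pop desc" % (kind, alias, kind, alias))
```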
The final part of the application is executing the SQL statement and looping through the result set to produce a table of results. Listing 10 shows this code.
Listing 10. Excerpt from read.py—Formatting results
try:
    stmt = ibm_db.exec_immediate(conn, sql)
except:
    print "Error retrieving data: ", ibm_db.stmt_errormsg()
else:
    res = ibm_db.fetch_both(stmt)
    print ".----------------------------------------------,"
    print "|                                              |"
    print "|", ("%s LIST BY POPULATION" % selected.upper()).center(44), "|"
    print "|                                              |"
    print "|----------------------------------------------|"
    print "|", ("%s" % selected.upper()).center(21), " | ", "POPULATION".center(18), "|"
    print "|----------------------------------------------|"
    while res != False:
        print "|", res[0].ljust(21), " | ", locale.format("%d", res[1], grouping=True).rjust(18), "|"
        res = ibm_db.fetch_both(stmt)
    print "'----------------------------------------------'"
In this code, you execute the SQL statement that was generated by the code in Listing 9, and print out a table that nicely formats the results. You are using a series of Python functions in this section that perform string manipulation such as left-justifying, centering, and right-justifying text and formatting the population value with thousand separators to make it easy to read.
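As a standalone illustration of the column formatting (this sketch uses str.format's "," for the thousands separators so it works in any locale; the tutorial itself uses locale.format so that grouping follows the user's locale settings):

```python
name = "West"
pop = 63197932

# 21-character left-justified name column and an 18-character
# right-justified population column, mirroring the widths in Listing 10.
row = "| %s | %s |" % (name.ljust(21), "{:,}".format(pop).rjust(18))
print(row)
```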
With the code for reading from the database complete, you are now ready to execute the script. From the Windows Command Prompt, make sure that you are in the project directory and use the following command to start the program:
python read.py
When the program executes, it will connect to DB2 and present you with the following list of menu options, which you can enter in the application:
- List of states by population
- List of regions by population
- List of countries by population
- Exit the application
Figure 11 shows the menu options. (See a text-only view of Figure 11.)
Figure 11. Application menu
Try entering an invalid menu option, such as the string hello. You will receive an error as in Figure 12, before you are asked to enter an option again. (See a text-only view of Figure 12.)
Figure 12. Invalid menu option error
This time, enter a valid option. I selected option 2 (List of regions by population). This should produce a result as in Figure 13. (See a text-only view of Figure 13.)
Figure 13. Regional population data
As you can see, the application presents a table with a list of regions, with the region with the largest population displayed first. You should see a similar result for the other two menu options, except that option 1 will display 51 states and option 3 will display just the one country.
Be sure to try out the different menu options for yourself, and try to enhance the application by adding some more options and different views of the data.
Summary
In this tutorial, you learned how to create a DB2 database that features tables with native XML data columns. You then learned how to install Python and the ibm_db extensions for Python through the easy_install utility. Next, you verified that you can communicate with the DB2 database from the Python interpreter. You then developed a Python script that pulled down population data from the U.S. Census Bureau Web site, before you converted this CSV data into XML format and saved it in DB2 tables. Finally, you created a basic command-line application that provides tabular reports about national, regional, and state-wide population data.
With the information provided in this tutorial, you should have the knowledge required to further your Python and DB2 development skills.
Download
Resources
Learn
- Create an alerts system using XMPP, SMS, pureXML, and PHP (Joe Lennon, developerWorks, November 2009): Follow along in this tutorial to import an XML file with Euro foreign exchange rates into an IBM DB2 database and use special XQuery and SQL/XML functions to split this XML into separate database rows.
- Getting Started with IBM DB2 Express-C: Visit the home page for DB2 Express-C.
- The Python Tutorial: Start learning Python.
- The Charming Python column (David Mertz, developerWorks, June 2000 - November 2009): Read the advanced articles in this series, some of the many developerWorks articles about Python.
- How do I run a Python program under Windows?: Find information on how to place the Python directory on your Windows Path.
- Guide: Read about the pureXML data store, hybrid database design, and administration in this IBM Redbook.
- Python: Download and get started with this powerful programming language. Version 2.6 is used in this tutorial.
- easy_install utility (setuptools): Install this utility and the ibm_db extension for Python so you can connect to and interact with an IBM DB2 database using Python code.
- Census data, CSV format: Get the census data from the census.gov Web site and build the application in this tutorial.
Merck & Co Inc (Symbol: MRK). So this week we highlight one interesting put contract, and one interesting call contract, from the January 2018 expiration for MRK. The put contract our YieldBoost algorithm identified as particularly interesting, is at the $57.5 strike, which has a bid at the time of this writing of $1.42. Collecting that bid as the premium represents a 2.5% return against the $57.5 commitment, or a 4.2% annualized rate of return (at Stock Options Channel we call this the YieldBoost ).
Turning to the other side of the option chain, we highlight one call contract of particular interest for the January 2018 expiration, for shareholders of Merck & Co Inc (Symbol: MRK) looking to boost their income beyond the stock's 3% annualized dividend yield. Selling the covered call at the $67.5 strike and collecting the premium based on the $1.63 bid, annualizes to an additional 4.4% rate of return against the current stock price (this is what we at Stock Options Channel refer to as the YieldBoost ), for a total of 7.3% annualized rate in the scenario where the stock is not called away. Any upside above $67.5 would be lost if the stock rises there and is called away, but MRK shares would have to climb 6% from current levels for that to happen, meaning that in the scenario where the stock is called, the shareholder has earned a 8.5% return from this trading level, in addition to any dividends collected before the stock was called.
Top YieldBoost MR. | http://www.nasdaq.com/article/one-put-one-call-option-to-know-about-for-merck1-cm805203 | CC-MAIN-2017-43 | refinedweb | 263 | 65.83 |
Course: PHP Namespaces in 120 Seconds Tutorial
Time to master PHP 5.3 namespaces! The good news is, namespaces are easy!
To prove it, we've challenged ourselves to explain them in 120 seconds.
Let's go!
Meet Foo. He's a PHP 5.2 class that does a lot of important things. Foo, say hi to the listener:
Ok, so Foo's humor is a bit old too.
Using Foo is easy - simply new Foo():
To keep up with the times, let's put Foo in a brand new PHP 5.3 namespace.
A namespace is like a directory, and by adding namespace, Foo now lives in Acme\Tools:
To use Foo, we have to call him by his fancy new name:
This is just like referring to a file by its absolute path.
And that's really it! Adding a namespace to a class is like organizing files from one directory, into a bunch of sub-directories. To refer to a class, use its fully-qualified name, starting with the slash. From here, it's all gravy.
Since running around with this giant name is a drag, let's add a shortcut:
The use statement lets us call the \Acme\Tools\Foo class by a nickname. Heck, we can call it anything, or just let it default to Foo:
Great! But what about old-school, non-namespaced PHP classes? For that, let's pick on DateTime, a handy class that's core to PHP and got some new bells and whistles in PHP 5.3. For ever and ever, creating a new DateTime object looked the same - new DateTime():
And if we're in a normal file, this still works. But in a namespaced file, PHP thinks you're talking about a class in the Acme\Tools namespace:
You can either refer to the class by its fully-qualified name - \DateTime:
or add a use statement:
Yes, the use statement looks silly, but it tells PHP that when you say DateTime, you mean the non-namespaced class DateTime. Oh, and get rid of the beginning slash with the use statement - everything works completely the same with or without these, but you typically don't see them:
Ok bye! | https://symfonycasts.com/screencast/php-namespaces-in-120-seconds/namespaces | CC-MAIN-2019-13 | refinedweb | 406 | 79.6 |
Flutter Navigation Tutorial
Learn about routes, navigation, and transitions for apps written using the Flutter cross-platform framework from Google.
Version
- Other, Android 4.4, Android Studio 3
What’s better than an app with one screen? Why, an app with two screens, of course! :]
Navigation is a key part of the UX for any mobile application. Due to the limited screen real estate on mobile devices, users will constantly be navigating between different screens, for example, from a list to a detail screen, from a shopping cart to a checkout screen, from a menu into a form, and many other cases. Good navigation helps your users find their way around and get a sense of the breadth of your app.
The iOS navigation experience is often built-around a UINavigationController, which uses a stack-based approach to shifting between screens. On Android, the Activity stack is the central means to shift a user between different screens. Unique transitions between screens in these stacks can help give your app a unique feel.
Just like the native SDKs, cross-platform development frameworks must provide a means for your app to switch between screens. In most cases, you’ll want the navigation approach to be consistent with what the users of each platform have come to expect.
Flutter is a cross-platform development SDK from Google that allows you to quickly create apps that target both iOS and Android from a single code base. If you’re new to Flutter, please checkout our Getting Started with Flutter tutorial to see the basics of working with Flutter.
In this tutorial, you’ll see how Flutter implements navigation between the different screens of a cross-platform app, by learning about:
- Routes and navigation
- Popping off the stack
- Returning a value from a route
- Custom navigation transitions
Getting Started
You can download the starter project for this tutorial from the materials link at the top or bottom of the page.
This tutorial will be using VS Code with the Flutter extension installed.
Once the project is open in VS Code, hit F5 to build and run the starter project. If VS Code prompts you to choose an environment to run the app in, choose “Dart & Flutter”:
Here is the project running in the iOS Simulator:
And here it is running on an Android emulator:
The “slow mode” banner you see is due to the fact that you’re running a debug build of the app.
The starter app shows the list of members in a GitHub organization. In this tutorial, we’ll navigate from this first screen to a new screen for each member.
Second Screen
We first need to create a screen to navigate to for each member. Elements of a Flutter UI take the form of UI widgets, so we’ll create a member widget.
First, right-click on the lib folder in the project, choose New File, and create a new file named memberwidget.dart:
Add import statements and a StatefulWidget subclass named MemberWidget to the new file:
import 'package:flutter/material.dart';
import 'member.dart';

class MemberWidget extends StatefulWidget {
  final Member member;

  MemberWidget(this.member) {
    if (member == null) {
      throw new ArgumentError("member of MemberWidget cannot be null. "
          "Received: '$member'");
    }
  }

  @override
  createState() => new MemberState(member);
}
A MemberWidget uses a MemberState class for its state, and passes along a Member object to the MemberState. You've made sure that the member argument is not-null in the widget constructor.
Add the MemberState class above MemberWidget in the same file:
class MemberState extends State<MemberWidget> {
  final Member member;

  MemberState(this.member);
}
Here, you've given MemberState a Member property and a constructor.
Each widget must override the build() method, so add the override to MemberState now:
@override
Widget build(BuildContext context) {
  return new Scaffold(
    appBar: new AppBar(
      title: new Text(member.login),
    ),
    body: new Padding(
      padding: new EdgeInsets.all(16.0),
      child: new Image.network(member.avatarUrl)
    )
  );
}
You’re creating a Scaffold, a material design container, which holds an AppBar and a Padding widget with a child Image for the member avatar.
With the member screen all setup, you now have somewhere to navigate to! :]
Routes
Navigation in Flutter is centered upon the idea of routes.
Routes are similar in concept to the routes that would be used in a REST API, where each route is relative to some root. The widget created by the
main() method in you app acts like the root.
One way to use routes is with the PageRoute class. Since you’re working with a Flutter MaterialApp, you’ll use the MaterialPageRoute subclass.
Add an import to the top of GHFlutterState to pull in the member widget:
import 'memberwidget.dart';
Next add a private method _pushMember() to GHFlutterState in the file ghflutterwidget.dart:
_pushMember(Member member) {
  Navigator.of(context).push(
    new MaterialPageRoute(
      builder: (context) => new MemberWidget(member)
    )
  );
}
You're using Navigator to push a new MaterialPageRoute onto the stack, and the MaterialPageRoute is built using your new MemberWidget.
Now you need to call _pushMember() when a user taps on a row in the list of members. You can do so by updating the _buildRow() method in GHFlutterState and adding an onTap attribute to the ListTile:
Widget _buildRow(int i) {
  return new Padding(
    padding: const EdgeInsets.all(16.0),
    child: new ListTile(
      title: new Text("${_members[i].login}", style: _biggerFont),
      leading: new CircleAvatar(
        backgroundColor: Colors.green,
        backgroundImage: new NetworkImage(_members[i].avatarUrl)
      ),
      // Add onTap here:
      onTap: () { _pushMember(_members[i]); },
    )
  );
}
When a row is tapped, your new method _pushMember() is called with the member that was tapped.
Hit F5 to build and run the app. Tap a member row and you should see the member detail screen come up:
And here’s the member screen running on iOS:
Notice that the back button on Android has the Android style and the back button on iOS has the iOS style, and also that the transition style when switching to the new screen matches the platform transition style.
Tapping the back button takes you back to the member list, but what if you want to manually trigger going back from your own button in the app?
Popping the stack
Since navigation in the Flutter app is working like a stack, and you’ve pushed a new screen widget onto the stack, you’ll pop from the stack in order to go back.
Add an IconButton to MemberState by updating its build() override to add a Column widget in place of just the Image:
@override
Widget build(BuildContext context) {
  return new Scaffold(
    appBar: new AppBar(
      title: new Text(member.login),
    ),
    body: new Padding(
      padding: new EdgeInsets.all(16.0),
      // Add Column here:
      child: new Column(
        children: [
          new Image.network(member.avatarUrl),
          new IconButton(
            icon: new Icon(Icons.arrow_back, color: Colors.green, size: 48.0),
            onPressed: () { Navigator.pop(context); }
          ),
        ]),
    )
  );
}
You've added the Column in order to lay out the Image and an IconButton vertically. For the IconButton, you've set its onPressed value to call Navigator and pop the stack.
Build and run the app using F5, and you’ll be able to go back to the member list by tapping your new back arrow:
Returning a value
Routes can return values, similar to results obtained in Android using onActivityResult().
To see a simple example, add the following private async method to MemberState:
_showOKScreen(BuildContext context) async {
  // 1, 2
  bool value = await Navigator.of(context).push(new MaterialPageRoute<bool>(
    builder: (BuildContext context) {
      return new Padding(
        padding: const EdgeInsets.all(32.0),
        // 3
        child: new Column(
          children: [
            new GestureDetector(
              child: new Text('OK'),
              // 4, 5
              onTap: () { Navigator.of(context).pop(true); }
            ),
            new GestureDetector(
              child: new Text('NOT OK'),
              // 4, 5
              onTap: () { Navigator.of(context).pop(false); }
            )
          ])
      );
    }
  ));
  // 6
  var alert = new AlertDialog(
    content: new Text((value != null && value)
        ? "OK was pressed"
        : "NOT OK or BACK was pressed"),
    actions: <Widget>[
      new FlatButton(
        child: new Text('OK'),
        // 7
        onPressed: () { Navigator.of(context).pop(); }
      )
    ],
  );
  // 8
  showDialog(context: context, child: alert);
}
Here is what’s going on in this method:
- You push a new MaterialPageRoute onto the stack, this time with a type parameter of bool.
- You use await when pushing the new route, which waits until the route is popped.
- The route you push onto the stack has a Column that shows two text widgets with gesture detectors.
- Tapping on the text widgets causes calls to Navigator to pop the new route off the stack.
- In the calls to pop(), you pass a return value of true if the user tapped the "OK" text on the screen, and false if the user tapped "NOT OK". If the user presses the back button instead, the value returned is null.
- You then create an AlertDialog to show the result returned from the route.
- Note that the AlertDialog itself must be popped off the stack.
- You call showDialog() to show the alert.
The primary points to note in the above are the bool type parameter in MaterialPageRoute<bool>, which you would replace with any other type you want coming back from the route, and the fact that you pass the result back in the call to pop, for example, Navigator.of(context).pop(true).
Update build() in MemberState to have a RaisedButton that calls _showOKScreen():
@override
Widget build(BuildContext context) {
  return new Scaffold(
    appBar: new AppBar(
      title: new Text(member.login),
    ),
    body: new Padding(
      padding: new EdgeInsets.all(16.0),
      child: new Column(
        children: [
          new Image.network(member.avatarUrl),
          new IconButton(
            icon: new Icon(Icons.arrow_back, color: Colors.green, size: 48.0),
            onPressed: () { Navigator.pop(context); }
          ),
          // Add RaisedButton here:
          new RaisedButton(
            child: new Text('PRESS ME'),
            onPressed: () { _showOKScreen(context); }
          )
        ]),
    )
  );
}
The RaisedButton you’ve added shows the new screen.
Hit F5 to build and run the app, tap the “PRESS ME” button, and then tap either “OK”, “NOT OK”, or the back button. You’ll get a result back from the new screen showing which of the results the user tapped:
Custom Transitions
In order to give the navigation of your app a unique feel, you can create a custom transition. You can either extend classes such as PageRoute, or use a class like PageRouteBuilder that defines custom routes with callbacks.
Replace _pushMember in GHFlutterState so that it pushes a new PageRouteBuilder onto the stack:
_pushMember(Member member) {
  // 1
  Navigator.of(context).push(new PageRouteBuilder(
    opaque: true,
    // 2
    transitionDuration: const Duration(milliseconds: 1000),
    // 3
    pageBuilder: (BuildContext context, _, __) {
      return new MemberWidget(member);
    },
    // 4
    transitionsBuilder: (_, Animation<double> animation, __, Widget child) {
      return new FadeTransition(
        opacity: animation,
        child: new RotationTransition(
          turns: new Tween<double>(begin: 0.0, end: 1.0).animate(animation),
          child: child,
        ),
      );
    }
  ));
}
Here you:
- Push a new PageRouteBuilder onto the stack.
- Specify the duration using transitionDuration.
- Create the MemberWidget screen using pageBuilder.
- Use the transitionsBuilder attribute to create fade and rotation transitions when showing the new route.
Hit F5 to build and run the app, and see your new transition in action:
Wow! That’s making me a little dizzy! :]
Where to go from here?
You can download the completed project using the download button at the top or bottom of this tutorial.
You can learn more about Flutter navigation by visiting:
- Routing and Navigation in the Flutter docs.
- The Navigator API docs.
As you’re reading the docs, check out in particular how to make named routes, which you call on Navigator using pushNamed().
Stay tuned for more Flutter tutorials and screencasts!
Feel free to share your feedback, findings or ask any questions in the comments below or in the forums. I hope you enjoyed learning about navigation with Flutter!
This is hold my beer level of hackery. It didn't work but we learned lots in the process.
CodeWithSwiz is a weekly live show. Like a podcast with video and fun hacking. Focused on experiments and open source. Join live most Tuesday mornings
The problem
On ServerlessHandbook.dev there is a paywall. Every visitor can read the first 30% of every chapter. To read more you have to buy the book or unlock specific chapters with an email.
This works.
But it doesn't look good. Content's not styled.
You're supposed to see this:
Current working solution
You get unstyled content because of how the paywall works.
// rendering the main content
{contentUnlocked ? (
  <main id="content">{props.children}</main>
) : (
  <main id="content">
    <SnipContent>{props.children}</SnipContent>
  </main>
)}
If content unlocked, show content. If content locked, snip.
The snipping is a hack:
export function SnipContent({ children }) {
  const html = ReactDOMServer.renderToString(children).split('<div id="lock"></div>')[0]
  return <div dangerouslySetInnerHTML={{ __html: html }} />
}
Take the children, render to a string, split by the lock, take first part, render back out as HTML.
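Stripped of the React parts, the snip is a plain string split on a marker element. Here's the same operation in Python for illustration (the marker string is the one used above):

```python
LOCK_MARKER = '<div id="lock"></div>'

def snip(rendered_html):
    """Keep only the HTML that precedes the paywall marker."""
    return rendered_html.split(LOCK_MARKER)[0]

page = "<p>free preview</p>" + LOCK_MARKER + "<p>paid chapter</p>"
assert snip(page) == "<p>free preview</p>"
assert snip("<p>no marker</p>") == "<p>no marker</p>"  # unlocked pages pass through
```

Note the convenient property of str.split: when the marker is absent, the whole string survives, so fully unlocked content is untouched.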
When this happens, we throw out all the styling machinery. ThemeUI doesn't see these components, doesn't add CSS classes.
Wrapping in ThemeUI's root wrapper doesn't work either. It's a CSS-in-JS library and doesn't add global tag-based styling. Needs to see the details.
Attempt 1: Hack React's AST
We have those ThemeUI classes – they're in the children prop!
The children prop is a portion of React's AST. An abstract syntax tree that represents the data structure of your React app. React uses this to run DOM events, deal with effects, and render.
Our content hides in the type: MDXContent node. We weren't able to dig into that.
For a brief moment it looked like we might be able to hack React's Fiber implementation, but that was too much. If I understand correctly, fibers are the syntax tree that React uses, but they're based on functions calling functions. Not a hackable data structure.
Attempt 2: Render to HTML, parse back to JSX
Next idea 👉 what if we render to HTML, snip the content, parse back to JSX, and pass that back to the main rendering machine?
import parseToReact from "html-react-parser"

// ...

const html = ReactDOMServer.renderToString(children).split('<div id="lock"></div>')[0]
const snippedChildren = parseToReact(html)

console.log(snippedChildren)

return snippedChildren
html-react-parser is a library that takes any HTML and parses it to a JSX string.
And it worked! We got an understandable data structure of our content 🥳
You can look at that and understand what's going on! React components have normal types, there's props, it all makes sense.
Except it doesn't style. ThemeUI doesn't run through these, doesn't add CSS props, doesn't do squat. 🥲
Attempt 3: Render as MDX
What if you took the HTML or the parsed JSX and shoved that in a MDX renderer yourself?
import { MDXProvider } from "@mdx-js/react"
import { MDXRenderer } from "gatsby-plugin-mdx"

// ...

return (
  <MDXProvider>
    <MDXRenderer>
      {snippedChildren} // or {html}
    </MDXRenderer>
  </MDXProvider>
)
Cryptic error from the bowels of Gatsby. Something undefined.
MDXProvider specifies components for MDX. Where the styling and any custom machinery hooks into.
MDXRenderer takes a compiled MDX source and renders it as React.
And that's the rub 👉 compiled MDX source. We don't have that. We have random HTML or React components. MDX chokes and dies.
You could compile with mdx(data) from the @mdx-js/mdx package, but that depends on Node.js libraries that don't work inside React components. You have to guarantee running on the server.
Including the full MDX runtime in your client code would work ... and destroy your Lighthouse scores. It's a big pile of code.
What I feared: The real solution
We're going to need a real solution – an MDX/Remark plugin that knows how to grab the snipped content from our data source. 💩
Next time!
#include <db.h>

int DB->set_flags(DB *db, u_int32_t flags);
Calling DB->set_flags is additive; there is no way to clear flags.
The flags value must be set to 0 or by bitwise inclusively OR’ing together one or more of the following values.
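OR-combined flag words work the same way in any language. Here is a generic Python illustration of the pattern — note the numeric values are invented for the example and are not Berkeley DB's actual constants:

```python
from enum import IntFlag, auto

class DbFlags(IntFlag):
    # Stand-in values for illustration only; not Berkeley DB's real constants.
    DB_DUP = auto()
    DB_RECNUM = auto()
    DB_DUPSORT = auto()

flags = DbFlags.DB_DUP | DbFlags.DB_DUPSORT   # bitwise-inclusive OR into one word
assert flags & DbFlags.DB_DUP                 # test membership with bitwise AND
assert not (flags & DbFlags.DB_RECNUM)
```

Passing 0 simply means "no flags set", which is why the API accepts either 0 or an OR of the listed values.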
The following flags may be specified for the Btree access method:
DB_DUP Permit duplicate data items in the tree; that is, insertion when the key of the key/data pair being inserted already exists in the tree will be successful. The ordering of duplicates in the tree is determined by the order of insertion, unless the ordering is otherwise specified by use of a cursor operation. It is an error to specify both DB_DUP and DB_RECNUM.
DB_DUPSORT Permit duplicate data items in the ...
What’s new in .NET Productivity with Visual Studio 2022
With the release of Visual Studio 2022 the Roslyn team continues to enhance your .NET developer productivity with the latest tooling improvements.
In this post I’ll cover the following .NET productivity enhancements:
- Navigate to source code
- Stack Trace Explorer
- Naming Styles in the EditorConfig UI
- Sync namespaces from Solution Explorer
- IntelliSense completion for await
- New Code Fixes and Refactorings
Navigation and code exploration
Navigating and exploring code is an integral part of developer productivity. In Visual Studio 2022 we now surface embedded source and Source Link as part of Go To Definition. This allows you to navigate to original source files that declare the target symbol that isn’t in your current solution. Place your cursor on a symbol and press F12 to navigate to the original source code. or use the shortcut Ctrl+E, Ctrl+S.
We added a new command called Value Tracking allowing you to perform data flow analysis on your code to help you quickly determine how certain values might have been passed at a given point. Value Tracking is available on any member in the context (right-click) menu by selecting the Track Value Source command. The Track Value Source command will open the Value Tracking window allowing you to analyze results.
We added an option to underline variables that are reassigned. This is off by default so you will need to enable it in Tools > Options > Text Editor > C# or Basic > Advanced and select Underline reassigned variables. This helps reassigned variables stand out from other variables in the same local scope.
The Code Definition Window now supports C# and Visual Basic allowing you to quickly understand and explore code. To use the Code Definition Window, select View > Window > Code Definition. Next, place your cursor on an identifier to navigate and explore code.
In C# 8.0 we introduced nullable reference types allowing you to declare whether null is expected. To use nullable reference types you either need to add the <Nullable>enable</Nullable> element to your project file or add the #nullable enable pragma to every source file in your project. To help streamline this process we now automatically include the <Nullable>enable</Nullable> for new .NET projects. For existing .NET projects that target .NET Core 3.1 or newer, we offer a new refactoring to enable nullable reference types across a project. Place your cursor on a #nullable enable pragma and press (Ctrl+.) to trigger the Quick Actions and Refactorings menu. Select Enable nullable reference types in a project.
IntelliSense completion
IntelliSense is a code-completion aid that includes a number of features: List Members, Parameter Info, Quick Info, and Completion. We recently added IntelliSense completion for await within an awaitable expression. Start typing an awaitable expression and notice how await will now show up in the completion list.
If you want to learn more about recent additions to IntelliSense including AI helping you code, check out the IntelliCode docs.
Code fixes & refactorings
Visual Studio provides hints to help you maintain and modify your code in the form of code fixes and refactorings. These appear as lightbulbs and screwdrivers next to your code or in the margin. The hints can resolve warnings and errors as well as provide suggestions. You can open these suggestions by typing (Ctrl+.) or by clicking on the lightbulb or screwdriver icons.
You can check out the most popular refactorings that are built in to Visual Studio at. We’ve added a bunch of new code fixes and refactorings in Visual Studio 2022! Here are some of our favorites.
- Introduce parameter
- File scoped namespace
- Use tuple to swap values
- Move static members
- Simplify property pattern
- Prefer null check over type check
- Sync namespaces
The introduce parameter refactoring will move an expression from a method implementation to its callers by adding a new parameter. Place your cursor on the line containing the expression or highlight the expression. Press (Ctrl+.) to trigger the Quick Actions and Refactorings menu. Select Introduce parameter for {0} or Introduce parameter for all occurrences of {0}. Both options will have three flyout options to either (1) insert the updated expression at all the call sites, (2) extract and create a new method that returns the expression and adds an argument at the call sites, or (3) create an overload of the method that contains the expression and calls upon the original method.
In C# 10.0 we introduced file-scoped namespace so you no longer need to nest class definitions within a namespace. To use file-scoped namespace, make sure your project targets the .NET 6.0 SDK or you can set the language version in your project file to 10.0. Place your cursor on a namespace. Press (Ctrl+.) to trigger the Quick Actions and Refactorings menu. Select Convert to file-scoped namespace.
There is a new refactoring that detects variable swaps and suggests using a tuple to swap values instead of using a temporary variable in order to swap arguments. Place your cursor on a temporary variable assignment where you are swapping values. Press (Ctrl+.) to trigger the Quick Actions and Refactorings menu. Select Use tuple to swap values.
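The suggested code is the same swap-via-tuple idiom long familiar from other languages; for comparison, this is Python's version of the pattern:

```python
a, b = 1, 2
b, a = a, b      # one tuple assignment; no temporary variable needed
assert (a, b) == (2, 1)
```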
You can more easily move static members to a new type. Place your cursor on a static member. Press (Ctrl+.) to trigger the Quick Actions and Refactorings menu. Select Move static members to another type. This will open a dialog where you can select the members that you would like to move.
The simplify code to use the new C# 10.0 extended property patterns refactoring reduces noise allowing you to reference nested members instead of nesting another recursive pattern. Place your cursor on a nested member reference. Press (Ctrl+.) to trigger the Quick Actions and Refactorings menu. Select Simplify property pattern.
There is now a refactoring to prefer is not null over is object when applied to value types. To use this new refactoring place your cursor on a type check. Press (Ctrl+.) to trigger the Quick Actions and Refactorings menu. Select Prefer ‘null’ check over type check.
You can also invoke code fixes and refactorings from the Solution Explorer (right-click) menu. One of our most popular refactorings is sync namespaces allowing you to synchronize namespaces to match your folder structure. The Sync Namespaces command is now available in the (right-click) menu of a project in Solution Explorer. Selecting Sync Namespaces will automatically synchronize namespaces to match your folder structure.
Code style enforcement
Enforcing consistent code style is important as developer teams and their code bases grow. You can configure code styles with EditorConfig! EditorConfig files help to keep your code consistent by defining code styles and formats. These files can live with your code in its repository and use the same source control. This way the style guidance is the same for everyone on your team who clones from that repository. With EditorConfig files you can enable or disable individual .NET coding conventions and configure the severity to which you want each rule enforced.
In Visual Studio 2019 we introduced a new UI for EditorConfig allowing you to easily view and configure every available .NET coding convention. In this release we added Naming Styles to the EditorConfig UI. Naming Styles allow you to enforce naming conventions in your code such as interfaces should always start with the letter “I”. To add an EditorConfig file to a project or solution:
- Right-click on the project or solution name within the Solution Explorer.
- Select Add New Item.
- In the Add New Item dialog, search for EditorConfig.
- Select the .NET EditorConfig template to add an EditorConfig file prepopulated with default options.
You can also add a new EditorConfig file from the command line by typing dotnet new editorconfig. To use dotnet new editorconfig make sure your project targets the .NET 6.0 SDK or later. Next, open the Visual Studio integrated terminal by pressing (Ctrl+`). You can then run dotnet new editorconfig to add a new EditorConfig file to your project.
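As a concrete sketch of the naming-style rules described above, here is what an "interfaces begin with I" convention looks like in an EditorConfig file. The option syntax follows the .NET naming-rule format; the rule, group, and style names are my own choices, and the severity of suggestion is just for illustration:

```
[*.{cs,vb}]
dotnet_naming_rule.interface_should_begin_with_i.severity = suggestion
dotnet_naming_rule.interface_should_begin_with_i.symbols = interface_group
dotnet_naming_rule.interface_should_begin_with_i.style = begins_with_i

dotnet_naming_symbols.interface_group.applicable_kinds = interface

dotnet_naming_style.begins_with_i.required_prefix = I
dotnet_naming_style.begins_with_i.capitalization = pascal_case
```

A naming rule always has three parts: which symbols it applies to, what style they must follow, and how loudly to complain when they don't.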
Thank you
Last, but certainly not least, a big Thank You to the following people who contributed to Roslyn. Ultimately, you’ve helped make Visual Studio 2022 awesome.
- AlFas (@AlFasGD): Fix unused Update parameter (PR #56741)
- Petr Onderka (@svick): Don’t suggest simplification for positional patterns (PR #57676)
- Adam Speight (@AdamSpeight2008): Simplify the Lambda Functions (PR #51731)
- Martin Strecker (@MaStr11):
- Youssef Victor (@Youssef1313):
- Pavel Krymets (@pakrym): Do not normalize leading whitespace in code comments (PR #57414)
- Kev Ritchie (@KevRitchie): Update documentation to explain FullyQualifiedFormat behavior (PR #57397)
- Marcio A Braga (@MarcioAB): Update TextSpan.cs (PR #57300)
- Bernd Baumanns (@bernd5): Fix for “Function pointer invocation has “None” IOperation” (PR #57191)
- Paul M Cohen (@paul1956): Initial fix for issue “VB Formatting of LineContinuation Wrong after _ ‘ Comment” (PR #54559)
- Saleh Ahmed Panna (@panna-ahmed): Localized missing warning message (PR #57502)
Get involved
This was just a sneak peak of what’s new in Visual Studio 2022. For a complete list of what’s new, see the release notes. And feel free to provide feedback on the Developer Community website, or using the Report a Problem tool in Visual Studio.
After all these years, can you at least implement auto-completion of the parentheses pair when entering a method name?
“File.ReadAllTextAsync”; why do I still have to enter the “()” manually? The vast majority of the time, when a method name is typed, it’s to invoke it().
+1, even me personally made proposal to put full call (inc. “;”). Nobody even bothered with that – they are busy with more important BS.
Seems C# team is a bunch of indian students who works on their own. Has MS at least 5 talented developers???
We are tracking that suggestion in if you want to join the discussion.
In the meantime you might like to try out tab-tab snippets which insert the parentheses, as well as take you into an argument completion mode. To try it out, hit tab twice when the completion list is shown.
Why language improvements (C# 9.0+) are NOT available for .NET Framework projects? What the hell so important and unique you have in Core (and don’t have in FW) that you NOT implemented “file-scoped namespace”? (feature which costs you literally nothing to implement)
This seems to be a common misconception – a lot of posts on it (including this one!) states that the language improvements require .NET 6. This is incorrect – all they require is setting langversion to 10 in your project file. I’ve successfully migrated a large .NET 4.7.2 project to file-scoped namespaces and global usings.
The main thing which is different with .NET 6 is that these are enabled by default – targeting .NET 6 implicitly sets langversion to 10, which means you don’t need to add the langversion tag.
Some features require .NET CLR support, such as default interfaces; setting the language version won’t make it possible to use those features.
Some other features require various shims, such as nullable attributes or records; it’s possible to add the necessary types by hand to fill in the blanks.
You’re ignoring the fact that Microsoft explicitly disabled the user’s ability to change the language version in the settings when we used to be able to do it. So it looks like it can’t be done. You had to google around and find articles on how it can be enabled. Most people aren’t going to do that. Plus doing so seems like you’re doing something non-standard, which is exactly the reason it was done: to reduce the overhead of testing and supporting newer C# features in .NET Framework. Some languages features won’t even work. So they’d have to have a way deal with that.
It’s just easier for MS to say not supported because they hardly want anyone to remain on .NET Framework.
The Sync Namespaces command seems a bit dangerous. There is a lot of case i’m not how would it work :
– Class conflict.
– Not loaded project or project in another sln
– Intended case
I prefer a rule in StyleCopAnalyzers.
For code style enforcement. It’s great but with style from the .ruleset and from .editorconfig it’s sometime hard to keep both in sync. It may be great to have one file to configure all code style. Or at least avoid overlap. The usage of ‘var’ instead on the name can be set on both and in different way.
I hope this improvement will be in VSCode too.
I can’t find the Stack Trace Explorer. I have the latest VS 2022. I tried the shortcut or looking for in View > Other Windows and it’s not there.
Hi Henok! Stack Trace Explorer will be available in 17.1 Preview 2. I will keep you posted on when 17.1 Preview 2 officially releases so you can give Stack Trace Explorer a try!
I can’t get the Code Definition Window to work with C#.
Also, as noted in another comment, the Stack Trace Explorer is not accessible from the menus nor from Quick Search; and it also is not available in the list of Commands (Tools -> Options… -> Environment -> Keyboard).
Hi Zehev! The Code Definition Window was released in 17.1 Preview 1. Can you validate that you are on version 17.1 Preview 1?
It’s confusing, Code Definition Window is in version 17.0.4 (release) too.
That would explain it — I’m on 17.0.4. It might be a good idea to clarify that in the article. Both Code Definition Window for C#/VB and Stack Trace Explorer look like features that I would use very heavily.
@Mika Dumont Thanks for the continuing effort of making Visual Studio more productive.
However, I would like to bring to your attention the growing number of reports regarding high memory consumption on Visual Studio Feedback. Given the high number of reports, it is an issue worth investigating and resolving.
Previously I investigated the memory consumption problem of VS 2022 and found that part of its memory leak could be attributed to unreleased WPF’s `AutomationPeer` instances. As a result, document views can not be released, and the huge Roslyn semantic results (code document objects) behind the view. Since VS is now 64bit, itself uses more memory and the leak is almost doubled versus VS 2019 or prior versions.
Here’s my finding about the leak:
And a related issue in WPF:
Why SQL debugger can’t step-into through VS2022 ?
In contrast, the VS 2019 works correctly.
I noticed that you had created `ConsoleApp335` in the first video.
Why not bring back the “Save new projects when created” option?
Thank you Mika for this great article,
I think the “Code Definition” window must be under View > Code Definition
but in the article, it is View > Window > Code Definition.
Yes, thank you for catching that!
6 July 2016 I created proposal “Intellisense: Insert full method call”. 5.5 YEARS(!!!!) past in stupid blah-blah and MS just now included feature in 17.2 milestone. Are you serious?! 5.5 years you discussing “do we need full method completion”?? If I were your boss, you’ll be fired in 1 year for wasting company resources and time.
Feature I suggested affects EVERY SINGLE CODER at every single minute of typing in IDE. What can be more important than this?!?! Productivity is #1 task which you worry about! Not icons, but productivity, hey!
And even now, if feature will be implemented, I’m 200% sure it will be done most idiotic way. I suggested “you type object.fu”, then intellisense popup appears, you select “function” and by ENTER you have in code “object.function();”. I simply GUARANTEE that MS will screw up this idea and do something different, remember my words.
And hell, remove ban from WrongBit – it’s me who gave you idea what you MUST implement! But because you annoy me by your stupidity and make nervous, I wrote what I think about MS team. It’s my freedom of speech, you cannot ban me for that! YOU are guilty you have so unqualified team (who even offered _ME_ to implement feature, instead of doing by MS team!!!).
Sync Namespaces also seems like a bad idea in general, since the folder structure serves Solution Explorer while namespaces serve IntelliSense type-ahead. Fine grained name spaces mean type-ahead seldom works. I’d rather not offer easy support that paradigm in the IDE.
“Go To Definition” to my own NuGet package with embedded source files does not work for me. VS2022 only shows the function headers. My library was built with the “EmbedAllSources” property set to “true” in the csproj file and
“dotnet pack --configuration Release --include-symbols”
Is there anything else I have to do to make this feature work?
My aim is to publish the NuGet package to our companies private NuGet directory on our own file server.
It’s 21/Jan/2022 and I have Visual Studio 2022 Community v17.0.5. None of these features work (e.g. no “stack trace explorer” window, and the “await” Intellisense doesn’t work). Which version of Visual Studio 2022 is required for these to work?
Using just what's on the standard Raspbian image, the easiest way, IMO, is to use Pygame.
This small code snippet below shows you how. Just put the wav file in the same place as your program.
import pygame
from time import sleep

# Initialise pygame and the mixer
pygame.init()
pygame.mixer.init()

# Load the sound file
mysound = pygame.mixer.Sound("mysound.wav")

# Play the sound file for 10 seconds and then stop it
mysound.play()
sleep(10)
mysound.stop()
You will have to use wav files, as opposed to other sounds files such as mp3, ogg, etc - use media.io to convert them.
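When a converted file still refuses to load, it helps to confirm what's actually in the header before blaming the mixer. This diagnostic sketch (my addition, using only the standard library's wave module) reads the header fields:

```python
import wave

def describe_wav(path_or_file):
    """Read a WAV header; a file that fails here won't load anywhere."""
    with wave.open(path_or_file, "rb") as w:
        return {
            "channels": w.getnchannels(),
            "sample_rate_hz": w.getframerate(),
            "sample_width_bytes": w.getsampwidth(),
            "frames": w.getnframes(),
        }
```

An unusual sample rate or a 24-bit sample width are common reasons a WAV plays in one program but not another.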
We have used a variety of wav files in Python. Some work and some don't. We discovered that exporting them from Audacity seems to eliminate problems. Is there a way in Python to ensure that sounds play correctly? FYI, we're using the Pygame library.
Interesting, I have never had a problem with *.wav files. Mp3's, ogg and other formats, yes lots of problems.
Chris Withers wrote:

>> That's how escaping works, be it in XML, encodings, compression, whatever.
>
> Well yes and no. I'd expect escaping to work such that whatever we're
> dealing with can be round tripped, ie: parsed, serialized, parsed
> again, etc.

that's exactly how it works in ET, of course. you put Python strings in
the tree, the ET parsers and serializers take care of the rest.

    elem = ET.Element("tag")
    elem.text = value  # ASCII or Unicode string
    ... write to disk ...
    ... read it back ...
    assert elem.text == value

>> You can read the SGML spec regarding CDATA.
>
> Not sure what that's supposed to mean. CDATA for me means stuff inside a
> <![CDATA[ ]]> section. _escape_cdata is used for everything inside any
> tag that isn't another tag.

cdata is character data; that's not the same thing as a "CDATA section"
(which is just one of several ways to store character data in an XML file).
how things are stored doesn't matter; that's just a serialization detail:

    What is not in the Information Set
    6. Whether characters are represented by character references.
    19. The boundaries of CDATA marked sections.
    ...

if you want to insert literal XML fragments in an ET tree, use the XML
factory function:

    fragment = "<tag>...</tag>"
    elem.append(ET.XML(fragment))

if you want to embed HTML fragments in an ET tree, use ElementTidy or
ElementSoup (or equivalent) to turn the fragment into properly nested and
properly namespaced XHTML.

if you want to do unstructured string handling, use a template library or
Python strings. don't use an XML library if you don't want to work with XML.

> That's true, sometimes. That inserted lump may have come from a process
> which can only spit out perfect html fragments, in which case you're
> fine, or it may come from user input, in which case you're doomed but
> will likely have happy customers ;-)

the hackers will be happy, at least:

</F>
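The round-trip contract described in the post is easy to verify against the modern xml.etree.ElementTree, whose API has the same shape:

```python
import xml.etree.ElementTree as ET

elem = ET.Element("tag")
elem.text = "a < b & c"          # plain Python string; no manual escaping
data = ET.tostring(elem)         # the serializer escapes on the way out
assert data == b"<tag>a &lt; b &amp; c</tag>"
assert ET.fromstring(data).text == "a < b & c"   # the parser restores it
```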
Very pythonic progress dialogs.
Sometimes, you see a piece of code and it just feels right. Here's an example I found when doing my "Import Antigravity" session for PyDay Buenos Aires: the progressbar module.
Here's an example that will teach you enough to use progressbar effectively:
progress = ProgressBar()
for i in progress(range(80)):
    time.sleep(0.01)
Yes, that's it, you will get a nice ASCII progress bar that goes across the terminal, supports resizing and moves as you iterate from 0 to 79.
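If you're curious what such a wrapper looks like inside, here is a minimal stdlib-only sketch of the same wrap-the-iterable idea (my own illustration, not the progressbar module's actual implementation):

```python
import sys

def progress(iterable, width=40, stream=sys.stderr):
    """Yield items from a sized iterable, redrawing an ASCII bar each step."""
    total = len(iterable)
    for done, item in enumerate(iterable, 1):
        filled = width * done // total
        bar = "#" * filled + "-" * (width - filled)
        stream.write("\r[%s] %3d%%" % (bar, 100 * done // total))
        stream.flush()
        yield item
    stream.write("\n")

for _ in progress(range(80)):
    pass   # the bar advances as the loop body runs
```

The trick that makes it feel right: the wrapper is a generator, so the loop body runs unchanged and the bar is just a side effect of iteration.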
The progressbar module even lets you do fancier things like ETA or file transfer speeds, all just as nicely.
Isn't that code just right? You want a progress bar for that loop? Wrap it and you have one! And of course since I am a PyQt programmer, how could I make PyQt have something as right as that?
Here's how the output looks:
You can do this with every toolkit, and you probably should! It has one extra feature: you can interrupt the iteration. Here's the (short) code:
# -*- coding: utf-8 -*-
import sys, time
from PyQt4 import QtCore, QtGui

def progress(data, *args):
    it = iter(data)
    widget = QtGui.QProgressDialog(*args + (0, it.__length_hint__()))
    c = 0
    for v in it:
        QtCore.QCoreApplication.instance().processEvents()
        if widget.wasCanceled():
            raise StopIteration
        c += 1
        widget.setValue(c)
        yield(v)

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    # Do something slow
    for x in progress(xrange(50), "Show Progress", "Stop the madness!"):
        time.sleep(.2)
Have fun!
Really nice one, but I would not use __length_hint__ because, as far as I know, is not documented and could disappear in any future version. Why not len(data)?
What about killing the c variable and writing something like: widget.setValue(widget.value()+1).
I did a quick profiling and checked that CPU time is almost identical (actually a slight advantage for this last version, though I can't say why...)
Which one is more pythonic?
Best Regards
Indeed len(data) is saner!
I just got confused becaue iter(data) had no length anymore!
I prefer the explicit counter because it looks more clear to me, but it's not a big difference :-)
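An aside on the __length_hint__ discussion above: Python 3.4 later gave this protocol a public face as operator.length_hint(), which tries len() first and falls back to __length_hint__ or a supplied default:

```python
from operator import length_hint

assert length_hint(range(50)) == 50            # len() works, so it's used
assert length_hint(iter(range(50))) == 50      # range iterators provide __length_hint__
assert length_hint((c for c in "abc"), -1) == -1   # generators give no hint
```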
Wow! It took me two readings of the code to realize it was a sequence wrapper that does not modify the sequence — I have never seen one of those before! Very elegant.
Thanks! Of course it was not my idea :-)
Will it convert an iterator into a list to measure its length? E.g. in your QT example where you use xrange.
I don't think so. But how could I test it?
The way you use __length_hint__ now doesn't convert it to a list. If you change that to len(list(it)) then of course it will.
You can test it by creating a large text file (say 100mb) and using file.xreadlines to iterate over them. If the memory usage of the program stays low - it is not creating a list. If you are then it will need to allocate 100mb of memory.
You can also test it more directly by giving it an iterator with side effects.
def aniterator():
    for n in range(5):
        print("yielding", n)
        yield n

for x in progress(aniterator()):
    print("processing", x)
If you get all yields before processing, it is creating a list. If it is not, you will get alternating "yielding" and "processing" messages.
I think len(data) won't work with xrange, for example, which is probably why he is using __length__hint__. Perhaps he should default back to len(list(iter)).
Also... on hitting cancel, it seems you might want to raise an error other than StopIteration, because with StopIteration, its hard to tell the difference outside between simply hitting the end of the loop and 'cancel' being called...
Actually len(xrange(100)) does work. I am not sure why I am using __length__hint__ I wrote this in a hurry :-)
Yes, what exception to throw is a matter of taste.
Why throw an exception, rather than just returning? | http://ralsina.me/weblog/posts/BB917.html | CC-MAIN-2020-34 | refinedweb | 676 | 66.64 |
Structuring an Elixir+Phoenix App
I’ve mix phx.new'ed many applications, and when doing so I often start by wondering how to organize my code. I love how Phoenix pushes you to think about the different domains in your app via generators while at the same time I have the freedom to organize modules on my own. Ecto schemas make for a nice abstraction, but where should I put code related just to that table? It could be in the context, but I don’t want the context to become a “grab bag” of unorganized function calls.
In the past, I’ve searched for someone writing on the subject but haven’t come up with much. I’ve even done some cursory glancing into repositories to get a feeling for what they do, but I’ve never looked thoroughly at different options. In this post, I share what I have found from four different open source Phoenix+Ecto applications. And as the old joke goes, I’ll be asking four developers for their opinions and getting four different answers. In the end, I’ll summarize how I plan to move forward.
Notes:
- Phoenix has evolved in how modules are organized, most notably splitting into
my_appand
my_app_webfolders and with the concept of contexts. Some of these applications were created with early versions of Phoenix which could explain some of the differences.
- When I say “typical Ecto schema logic” below, I’m referring to examples in the Ecto documentation and the community on the things to put into schema files (field definitions, schema attributes (such as
@primary_key,
@schema_prefix, etc…), and changeset logic)
Avia
Repository description: “open source e-commerce framework”
A lot of the business logic can be found under
apps/snitch_core/lib/core. There is a
domain folder containing what appears to be the front-end API modules (what Phoenix might call “contexts”). Next to the
domain folder is a
data folder containing
schema and
model directories.
The
schema directory contains typical Ecto schema files. The
model directory contains correspondingly named modules with CRUD functions (like
create,
update,
delete,
get) but also occasionally some helper functions related to those domain objects (functions like
formatted_list or
get_all_by_shipping_category )
Each type of module also has a
use statement at the top (i.e.
use Snitch.Data.Model ) referring to a module containing shared logic. It’s worth looking at what that shared logic is:
# apps/snitch_core/lib/core/domain/domain.ex
alias Ecto.Multi
alias Snitch.Data.{Model, Schema}
alias Snitch.Domain
alias Snitch.Core.Tools.MultiTenancy.Repo# apps/snitch_core/lib/core/data/model/model.ex
import Ecto.Query
alias Snitch.Core.Tools.MultiTenancy.Repo
alias Snitch.Tools
alias Tools.Helper.Query, as: QH# apps/snitch_core/lib/core/data/schema/schema.ex
use Ecto.Schema
import Ecto.Changeset
import Snitch.Tools.Validations
alias Snitch.Core.Tools.MultiTenancy.Repo
The domain modules alias the model modules and the model modules alias the schema modules, indicating the usage pattern of going deeper (Domain -> Model -> Schema):
# apps/snitch_core/lib/core/domain/stock/stock_location.ex
alias Model.StockLocation, as: StockLocationModel# apps/snitch_core/lib/core/data/model/stock/stock_location.ex
alias Snitch.Data.Schema.StockLocation, as: StockLocationSchema
Changelog
Repository description: This is the CMS behind changelog.com.
The business logic is under
lib/changelog . This directory seems to contain various modules as well as directories containing grouped functionality. All of the Ecto logic looks to be under the
schema directory which contains some base schema modules as well as directories containing grouped schema functionality.
Schemas have the typical Ecto schema logic but also sometimes many helpers like
admins,
with_email,
get_by_website which are scoping/querying as well as defining changeset functions like
auth_changeset,
admin_insert_changeset,
admin_update_changeset,
file_changeset, etc…
The schemas use the
Changelog.Schema module which, in addition to adding many helper functions like
any?,
by_position,
limit,
newest_first,
newest_last, etc…, does this:
use Ecto.Schema
use Arc.Ecto.Schemaimport Ecto
import Ecto.Changeset
import Ecto.Query, only: [from: 1, from: 2]
import EctoEnum, only: [defenum: 2]alias Changelog.{Hashid, Repo}
Hexpm
Repository description: API server and website for Hex
The
lib/hexpm directory contains some modules with basic logic, but the schemas and contexts exist inside of grouping folders. For example, the
lib/hexpm/accounts folder has the
User schema and the
Users context as well as the
Organization schema and the
Organizations context. The singular modules (i.e.
User and
Organization) have the typical Ecto schema logic.
The two types of module
use the
Hexpm.Schema and
Hexpm.Context modules:
# lib/hexpm/schema.ex
import Ecto
import Ecto.Changeset
import Ecto.Query, only: [from: 1, from: 2]
import Hexpm.Changesetalias Ecto.Multiuse Hexpm.Shared# lib/hexpm/context.ex
import Ecto
import Ecto.Changeset
import Ecto.Query, only: [from: 1, from: 2]import Hexpm.Accounts.AuditLog,
only: [audit: 3, audit: 4, audit_many: 4, audit_with_user: 4]alias Ecto.Multi
alias Hexpm.Repouse Hexpm.Shared
You might have noticed that both
use the
Hexpm.Shared module. This just does a lot of aliases which means that modules like
Hexpm.Accounts.AuditLog and
Hexpm.Repository.Download become just
AuditLog and
Download …
While that pattern seems common, it’s not always the case. There is an
Auth module which is just a plain module as well as
UserHandles and
Hexpm.Accounts.Email actually seems to be used in the
Hexpm.Emails and
Hexpm.Emails.Bamboo, which seems to be a case of one context reaching into another.
elixirstatus-web
Repository description: Community site for Elixir project/blog post/version updates
At the root of this project, there are
lib and
web directories. The schemas are located under
web/models. This appears to be a pretty old app (the
LICENSE file is five years old), which is probably why it’s not using the recent pattern of putting business logic outside of the “web” part of the app.
The
models directory contains four schemas (
Impression,
Posting,
ShortLink, and
User) which all define typical Ecto schema logic. These all
use ElixirStatus.Web, :model which does:
use Ecto.Schema
import Ecto
import Ecto.Changeset
Another module under
web/models is
Avatar which doesn’t seem to be a schema but rather a grouping of helper functions.
As an example of an context-like module, the
Impressionist module (stored at
lib/elixir_status/impressionist.ex) defines various querying methods for the
Impression schema along with some other helpers.
My thoughts:
I already like Phoenix conventions like:
• Separating business logic from the web application logic
• Separating business logic into contexts with well-established APIs
• Ecto schema modules which are focused on mapping and validation of the data source
Things I like about these projects:
• It’s very nice to have modules headed with something like
use MyApp.Schema or
use MyApp.Context as the Hexpm project does. Even if the
used module doesn’t do much, it provides an at-a-glance label when browsing files.
• I like that Hexpm has established a bit of a convention around schemas (singular
User) vs contexts (plural
Users).
• I like how the Avia project separates “domain”, “model”, and “schema”. In particular as a fan of Domain Driven Design using the word “domain” is nice and I think it’s used in the same way.
Things I don’t like from these projects:
• Aliasing the right-most module in a path (as the Avia project does) drops it’s context. If
Hexpm.Accounts.AuditLog is aliased as
AuditLog, that might not be so bad because
AuditLog is potentially a unique concept. But aliasing
Hexpm.Repository.Download as
Download could confuse. If you alias
Hexpm.Accounts or
Hexpm.Repository you can refer to
Accounts.AuditLog or
Repository.Download which I find clearer.
• In the Avia project sometimes there are aliases like
Model.StockLocation aliased as
StockLocationModel . I would find it simpler to just refer to
Model.StockLocation which is one character longer but makes the source clearer.
• In hexpm the schema vs context convention doesn’t help when browsing a directory to distinguish schemas from plain module files.
As a long-time Rails developer, one thing that makes Rails nice is being able to go between apps easily because there is always a place for everything. But as an app grows large, grouping files by type means that directories like
controllers and
models get very full. The Phoenix project, I think trying to learn from Rails, encourages using contexts with well-defined APIs. Since each context often needs to solve different problems (such as wrapping a database, creating an API client, or just doing calculations), these can be structured however you like. But when it makes sense I think that we could create directories according to conventions to organize our code. For a long time, many projects have established loose conventions with directories like
lib,
docs,
log, and
test. In the web part of a Phoenix application, we have
controller,
channel,
view, etc…
We could do the same in the very common case where our contexts contain Ecto database logic. We are given the “schema” idea from Ecto itself as a way to separate transformation and validation logic. This helps us trim the fat from our “fat model” problem. But we’re left to put other query logic either into our schema or to have it mixed it with all of our context’s business logic.
So after my investigation, the way that I plan to move forward:
# The context’s public API, headed with `use MyApp.Context`
my_app/<context>.ex# Headed with `use MyApp.Schema`
my_app/<context>/schema/user.ex# Headed with `use MyApp.Query`
my_app/<context>/query/user.ex# For non-DB business logic
my_app/<context>/<some_module>.ex
my_app/<context>/<some_module>/<sub_module>.ex
These things might certainly change, but having looked through some other codebases and reflecting on what I like and don’t like, I think that this will be a good start. | https://medium.com/fishbrain/structuring-an-elixir-phoenix-app-e32de2919f9a | CC-MAIN-2021-49 | refinedweb | 1,642 | 50.33 |
At first I want to thank Mr. Tomas Franzon as I used his help a lot in this project. This code opens and closes 0x378 port ( printer port ) manually. In XP we have no permission to access external port like Printer. Here we copy a sys file to system32 and use it in the code so we can send and receive data through this port.
For using the code you have to copy PortAccess.sys ( exists in source ) to system32\drivers. Then you can run the program.
For sending and receiving data we do this :
#include <conio.h> #include <math.h> // // for sending data _outp(Port_Address,iByte); //
Return Value:
This function returns data output . There is no error return.
Parameters:
Unsigned short Port Number ( defined 0x378 )
Int Output value
// for receiving data iByte=_inp(Port_Address);
Return Value:
This function returns the byte, word or int from port. There is no error report
Parameters:
Unsigned short Port Number ( defined 0x378 )
When you check a check box the
iBitOut value is Set or Reset.
// to set or reset a bit iBitOut[i]=1-iBitOut[i];
When you click output button the Bits are changed to
int value to send to port.
// to send value to port iByte=0; for (i=0;i<=7;i++) { iTemp=pow(2,i); nIndex=iTemp*iBitOut[i]; iByte=iByte+nIndex; } _outp(Port_Address,iByte);
When you click input button the input value is read and
iBitIn[i] becomes set or reset.
// to get data from port iByte=_inp(Port_Address); for (i=7;i>=0;i--) { nIndex=pow(2,i); if (iByte>=nIndex) { iByte=iByte-nIndex; iBitIn[i]=1; } else iBitIn[i]=0; }
For checking a check box we use code below :
// to check a check box if (iBitIn[0]==1) CheckDlgButton(hDlg,IDC_CHECKINPUT1,BST_CHECKED);
I always work with peripheral devices. In these cases, communicating in parallel mode is more flexible and easier than the other modes like serial communication. So I am forced to go to this field. You can send and receive
int,
long,
Byte,
unsigned short,
unsigned char etc as easy as possible.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/system/AsefPortAccess.aspx | crawl-002 | refinedweb | 354 | 65.22 |
Vue.js is a rapidly growing front-end framework for
JavaScript, inspired by
Angular.js,
Reactive.js, and
Rivets.js that offers simplistic user-interface design, manipulation, and deep reactivity.
It is described as a
MVVM patterned framework,
Model-View View-Model, which is based on the concept of two-way binding data to components and views. It is incredibly fast, exceeding speeds of other top-tier
JS frameworks, and very user friendly for easy integration and prototyping.
To start using Vue.js, make sure you have the script file included in your HTML. For example, add the following to your HTML.
<script src=""></script>
<div id="app"> {{ message }} </div>
new Vue({ el: '#app', data: { message: 'Hello Vue.js!' } })
See a live demo of this example.
You might also want to check out the "Hello World" example made by Vue.js.
VueJS can be used to easily handle user input as well, and the two way binding using v-model makes it really easy to change data easily.
HTML :
<script src=""></script> <div id="app"> {{message}} <input v- </div>
JS :
new Vue({ el: '#app', data: { message: 'Hello Vue.js!' } })
It is very easy to do a two-way binding in VueJS using
v-model directive.
JSX is not meant to be interpreted by the browser. It must be first transpiled into standard Javascript. To use JSX you need to install the plugin for babel
babel-plugin-transform-vue-JSX
Run the Command below:
npm install babel-plugin-syntax-jsx babel-plugin-transform-vue-jsx babel-helper-vue-jsx-merge-props --save-dev
and add it to your
.babelrc like this:
{ "presets": ["es2015"], "plugins": ["transform-vue-jsx"] }
Sample code with VUE JSX:
import Vue from 'vue' import App from './App.vue' new Vue({ el: '#app', methods: { handleClick () { alert('Hello!') } }, render (h) { return ( <div> <h1 on-click={this.handleClick}>Hello from JSX</h1> <p> Hello World </p> </div> ) } })
By using JSX you can write concise HTML/XML-like structures in the same file as you write JavaScript code.
Congratulations, You're Done :) | http://riptutorial.com/vue-js/topic/1057/getting-started-with-vue-js | CC-MAIN-2018-22 | refinedweb | 340 | 58.08 |
# Disable API/Database
Did you know you could deploy your Redwood app without an API layer or database? Maybe you have a simple static site that doesn't need any external data, or you only need to digest a simple JSON data structure that changes infrequently. So infrequently that changing the data can mean just editing a plain text file and deploying your site again.
Let's take a look at these scenarios and how you can get them working with Redwood.
# Assumptions
We assume you're deploying to Netlify in this recipe. Your mileage may vary for other providers or a custom build process.
# Remove the /api directory
Just delete the
/api directory altogether and your app will still work in dev mode:
rm -rf api
You can also run
yarn install to cleanup those packages that aren't used any more.
# Turn off the API build process
When it comes time to deploy, we need to let Netlify know that it shouldn't bother trying to look for any code to turn into AWS Lambda functions.
Open up
netlify.toml. We're going to comment out one line:
[build] command = "yarn rw build" publish = "web/dist" # functions = "api/dist/functions" [dev] command = "yarn rw dev" [[redirects]] from = "/*" to = "/index.html" status = 200
If you just have a static site that doesn't need any data access at all (even our simple JSON file discussed above) then you're done! Keep reading to see how you can access a local data store that we'll deploy along with the web side of our app.
# Local JSON Fetch
Let's display a graph of the weather forecast for the week of Jan 30, 2017 in Moscow, Russia. If this seems like a strangely specific scenario it's because that's the example data we can quickly get from the OpenWeather API. Get the JSON data here or copy the following and save it to a file at
web/public/forecast.json:
{ "cod": "200", "message": 0, "city": { "geoname_id": 524901, "name": "Moscow", "lat": 55.7522, "lon": 37.6156, "country": "RU", "iso2": "RU", "type": "city", "population": 0 }, "cnt": 7, "list": [ { "dt": 1485766800, "temp": { "day": 262.65, "min": 261.41, "max": 262.65, "night": 261.41, "eve": 262.65, "morn": 262.65 }, "pressure": 1024.53, "humidity": 76, "weather": [ { "id": 800, "main": "Clear", "description": "sky is clear", "icon": "01d" } ], "speed": 4.57, "deg": 225, "clouds": 0, "snow": 0.01 }, { "dt": 1485853200, "temp": { "day": 262.31, "min": 260.98, "max": 265.44, "night": 265.44, "eve": 264.18, "morn": 261.46 }, "pressure": 1018.1, "humidity": 91, "weather": [ { "id": 600, "main": "Snow", "description": "light snow", "icon": "13d" } ], "speed": 4.1, "deg": 249, "clouds": 88, "snow": 1.44 }, { "dt": 1485939600, "temp": { "day": 270.27, "min": 266.9, "max": 270.59, "night": 268.06, "eve": 269.66, "morn": 266.9 }, "pressure": 1010.85, "humidity": 92, "weather": [ { "id": 600, "main": "Snow", "description": "light snow", "icon": "13d" } ], "speed": 4.53, "deg": 298, "clouds": 64, "snow": 0.92 }, { "dt": 1486026000, "temp": { "day": 263.46, "min": 255.19, "max": 264.02, "night": 255.59, "eve": 259.68, "morn": 263.38 }, "pressure": 1019.32, "humidity": 84, "weather": [ { "id": 800, "main": "Clear", "description": "sky is clear", "icon": "01d" } ], "speed": 3.06, "deg": 344, "clouds": 0 }, { "dt": 1486112400, "temp": { "day": 265.69, "min": 256.55, "max": 266, "night": 256.55, "eve": 260.09, "morn": 266 }, "pressure": 1012.2, "humidity": 0, "weather": [ { "id": 600, "main": "Snow", "description": "light snow", "icon": "13d" } ], "speed": 7.35, "deg": 24, "clouds": 45, "snow": 0.21 }, { "dt": 1486198800, "temp": { "day": 259.95, "min": 254.73, "max": 259.95, "night": 257.13, "eve": 254.73, "morn": 257.02 }, "pressure": 1029.5, "humidity": 0, "weather": [ { "id": 800, "main": "Clear", "description": "sky is clear", "icon": "01d" } ], "speed": 2.6, "deg": 331, 
"clouds": 29 }, { "dt": 1486285200, "temp": { "day": 263.13, "min": 259.11, "max": 263.13, "night": 262.01, "eve": 261.32, "morn": 259.11 }, "pressure": 1023.21, "humidity": 0, "weather": [ { "id": 600, "main": "Snow", "description": "light snow", "icon": "13d" } ], "speed": 5.33, "deg": 234, "clouds": 46, "snow": 0.04 } ] }
Any files that you put in
web/public will be served by Netlify, skipping any build process.
Next let's have a React component get that data remotely and then display it on a page. For this example we'll generate a homepage:
yarn rw generate page home /
Next we'll use the browser's builtin
fetch() function to get the data and then we'll just dump it to the screen to make sure it works:
import { useState, useEffect } from 'react' const HomePage = () => { const [forecast, setForecast] = useState({}) useEffect(() => { fetch('/forecast.json') .then((response) => response.json()) .then((json) => setForecast(json)) }, []) return <div>{JSON.stringify(forecast)}</div> } export default HomePage
We use
useState to keep track of the forecast data and
useEffect to actually trigger the loading of the data when the component mounts. Now we just need a graph! Let's add chart.js for some simple graphing:
yarn workspace web add chart.js
Let's generate a sample graph:
import { useState, useEffect, useRef } from 'react'import Chart from 'chart.js' const HomePage = () => { const chartRef = useRef() const [forecast, setForecast] = useState({}) useEffect(() => { fetch('/forecast.json') .then((response) => response.json()) .then((json) => setForecast(json)) }, []) useEffect(() => { new Chart(chartRef.current.getContext('2d'), { type: 'line', data: { labels: ['Jan', 'Feb', 'March'], datasets: [ { label: 'High', data: [86, 67, 91], }, { label: 'Low', data: [45, 43, 55], }, ], }, }) }, [forecast]) return <canvas ref={chartRef} />} export default HomePage
If that looks good then all that's left is to transform the weather data JSON into the format that Chart.js wants. Here's the final
HomePage including a couple of functions to transform our data and display the dates properly:
import { useState, useEffect, useRef } from 'react' import Chart from 'chart.js' const MONTHS = [ 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec', ] const getDates = (forecast) => { return forecast.list.map((entry) => { const date = new Date(0) date.setUTCSeconds(entry.dt) return `${MONTHS[date.getMonth()]} ${date.getDate()}` }) } const getTemps = (forecast) => { return [ { label: 'High', data: forecast.list.map((entry) => kelvinToFahrenheit(entry.temp.max)), borderColor: 'red', backgroundColor: 'transparent', }, { label: 'Low', data: forecast.list.map((entry) => kelvinToFahrenheit(entry.temp.min)), borderColor: 'blue', backgroundColor: 'transparent', }, ] } const kelvinToFahrenheit = (temp) => { return ((temp - 273.15) * 9) / 5 + 32 } const HomePage = () => { const chartRef = useRef() const [forecast, setForecast] = useState(null) useEffect(() => { fetch('/forecast.json') .then((response) => response.json()) .then((json) => setForecast(json)) }, []) useEffect(() => { if (forecast) { new Chart(chartRef.current.getContext('2d'), { type: 'line', data: { labels: getDates(forecast), datasets: getTemps(forecast), }, }) } }, [forecast]) return <canvas ref={chartRef} /> } export default HomePage
If you got all of that right then you should see:
All that's left is to deploy it to the world!
# Wrapping Up
Although we think Redwood will make app developers' lives easier when they need to talk to a database or third party API, it can be used with static sites and even hybrid sites like this when you want to digest and display data, but from a static file at your own URL. | https://redwoodjs.com/cookbook/disable-api-database | CC-MAIN-2020-45 | refinedweb | 1,159 | 69.68 |
import "android.googlesource.com/platform/tools/gpu/ringbuffer"
Package ringbuffer implements an in-memory circular buffer conforming to the io.ReadWriteCloser interface.
func New(capacity int) io.ReadWriteCloser
New constructs a new ring-buffer with the specified capacity in bytes.
Writes to the ring-buffer will block until all the bytes are written into the buffer or the buffer is closed (whichever comes first.) Reads from the ring-buffer will block until a single byte is read or the buffer is closed (whichever comes first.) It is safe to call Read and Write in parallel with each other or with Close. If the ring-buffer is closed while a read or write is in progress then io.ErrClosedPipe will be returned by the read / write function. | https://android.googlesource.com/platform/tools/gpu/+/refs/heads/studio-1.3-release/ringbuffer/ | CC-MAIN-2022-27 | refinedweb | 126 | 57.37 |
Hello,
I've searched the python sites and help, library's and all the forums I could find, but haven't seen any mention of this. This forum seemed like a good place to ask this.
I am on an HP laptop Intel Core 2 Duo, running Windows 7 Pro SP1 32Bit. I am using Python 2.7.3.
I have an application I built that ran fine on Windows XP, but now fails on Windows 7. The place I'm encountering the problem is where I try to read a key from the registry. I believe it's because of the Virtualization of the registry on Windows 7. This key is created by another app that I'm trying to co-ordinate with. On Windows XP the Registry key was:
[HKEY_LOCAL_MACHINE\SOFTWARE\Interface Software\ConnMgr]
"DB Path"="C:\\Documents and Settings\\All Users\\Application Data\\<path to a data file>"
When this app is installed on Windows 7, the key is directed to the registry Virtual Store at:
[HKEY_CURRENT_USER\Software\Classes\VirtualStore\MACHINE\SOFTWARE\Interface Software\ConnMgr]
"DB Path"="C:\\ProgramData\\EnvisionWare\\<path to a data file>"
So far that is what I think I'd expected on Windows 7 and the virtualization of the registry.
The code fragment that is reading the registry is:
-----
from _winreg import *
ConnKey = OpenKey(HKEY_LOCAL_MACHINE, r'SOFTWARE\Interface Software\ConnMgr', 0, KEY_READ)
ConnValue = QueryValueEx(ConnKey, "DB Path")
EWDataSource = os.path.split(str(ConnValue[0]))
------
The OpenKey fails with the message: WindowsError: (2, 'The system cannot find the file specified"). I believe this is because the key does not exist at the path [HKEY_LOCAL_MACHINE\SOFTWARE\Interface Software\ConnMgr].
After all this, the question is: Why isn't the OpenKey call being redirected to the VirtualStore? What can I change in the program, ACLs or other to make it be redirected?
Any help would be appreciated.
Thanks,
John | http://forums.devshed.com/python-programming/946214-accessing-windows-7-registry-virtualstore-last-post.html | CC-MAIN-2014-15 | refinedweb | 313 | 53.21 |
Language in C Interview Questions and Answers
Ques 61. How do I use swab( ) in my program ?
Ans. The function swab( ) swaps the adjacent bytes of memory. It copies the bytes from source string to the target string, provided that the number of characters in the source string is even. While copying, it swaps the bytes which are then assigned to the target string.
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
main ( )
{
char *str1 = "hS eesll snsiasl not eh es as oher " ;
char *str2 ;
clrscr( ) ;
swab ( str1, str2, strlen ( str1 ) ) ;
printf ( "The target string is : %s\n", str2 ) ; // output -- She sells
snails on the sea shore
getch( ) ;
}
Is it helpful? Add Comment View Comments
Ques 62. What does the error "Null Pointer Assignment" mean and what causes this error?
Ans. The Null Pointer Assignment error is generated only in small and medium memory models. This error occurs in programs which attempt to change the bottom of the data segment. In Borland's C or C++ compilers, Borland places four zero bytes at the bottom of the data segment, followed by the Borland copyright notice "Borland C++ - Copyright 1991 Borland Intl.". In the small and medium memory models, a null pointer points to DS:0000. Thus assigning a value to the memory referenced by this pointer will overwrite the first zero byte in the data segment. At program termination, the four zeros and the copyright banner are checked. If either has been modified, then the Null Pointer Assignment error is generated. Note that the pointer may not truly be null, but may be a wild pointer that references these key areas in the data segment.
Data Structures
Is it helpful? Add Comment View Comments
Ques 63. How to build an expression trees ?
Ans. An expression tree is a binary tree which is built from simple operands and operators of an (arithmetic or logical ) expression by placing simple operands as the leaves of a binary tree and the operators as the interior nodes. If an operator is binary , then it has two nonempty subtrees, that are its left and right operands (either simple operands or sub expressions). If an operator is unary, then only one of its subtrees is nonempty, the one on the left or right according as the operator is written on the right or left of its operand. We traditionally write some unary operators to the left of their operands, such as "-" ( unary negation) or the standard functions like log( ), sin( ) etc. Others are written on the right, such as the factorial function ()!. If the operator is written on the left, then in the expression tree we take its left subtree as empty. If it appears on the right, then its right subtree will be empty. An example of an expression tree is shown below for the expression ( -a < b ) or ( c + d ) .
Is it helpful? Add Comment View Comments
Ques 64. Can we get the remainder of a floating point division ?
Ans. Yes. Although the % operator fails to work on float numbers we can still get the remainder of floating point division by using a function fmod( ). The fmod( ) function divides the two float numbers passed to it as parameters and returns the remainder as a floating-point value. Following program shows fmod( ) function at work.
#include <math.h>
main( )
{
printf ( "%f", fmod ( 5.15, 3.0 ) ) ;
}
The above code snippet would give the output as 2.150000.
Is it helpful? Add Comment View Comments
Ques 65. How to extract the integer part and a fractional part of a floating point number?
Ans. C function modf( ) can be used to get the integer and fractional part of a floating point.
#include "math.h"
main( )
{
double val, i, f ;
val = 5.15 ;
f = modf ( val, &i ) ;
printf ( "\nFor the value %f integer part = %f and fractional part = %f",
val, i, f ) ;
}
The output of the above program will be:
For the value 5.150000 integer part = 5.000000 and fractional part =
0.150000
Is it helpful? Add Comment View Comments
3) How to build an expression trees ?
4) Can we get the remainder of a floating point division ?
5) How to extract the integer part and a fractional part of a floating point number?
" />? | http://www.withoutbook.com/Technology.php?tech=11&page=13&subject=Interview%20Questions%20and%20Answers | CC-MAIN-2020-24 | refinedweb | 707 | 74.08 |
Hello, I'm having a slight problem with my code. The task is to create an indexing program, similar to the ones the google uses.
The problem i'm having is that we have to remove the common ending from the words left after the removal of stop_words(which is a list variable not a string variable). I proceed to convert every item in the list, one at a time, to a string with code as follows
leaf_words = "s","es","ed","er","ly","ing" for words in line_stop_words: #line_stop_words is the list of words without any "stop words" present, eg only the essential info stemming_word = "" for chars in words: print chars stemming_word = chars if stemming_word[-1] == leaf_words: stemming_word[-1] = "" #to remove that letter from the string print stemming_word
Two issues i have are that, each time its finished with the first line of text, it throws an error, saying the index is out of bounds. The problem i believe lies in the if statement because i dont think the for loop is moving to the next item in the line_stop_words list.
Second of all it doesnt actually remove the leaf word from the main string ( Say you have blows, it doesnt remove the s)
Any help or advice you can would be very helpful.
The rest of the code, so you know what im talking about is:
import string i = 0 text_input = "" total_text_input = "" line = [] n = 0 char = "" while i != 1: text_input = raw_input ("") if text_input == ".": i = 1 else: new_char_string = "" for char in text_input: if char in string.punctuation: char = " " new_char_string = new_char_string + char line = line + [new_char_string.lower()] total_text_input = (total_text_input + new_char_string).lower() stop_words = "a","i","it","am","at","on","in","of","to","is","so","too","my","the","and","but","are","very","here","even","from","them","then","than","this","that","though" line_stop_words = [] word_list = "" sent = "" word = "" for sent in line: word_list = string.split(sent) new_string = "" for word in word_list: if word not in stop_words: new_string = new_string + word + ";" new_string = string.split(new_string,";") line_stop_words = line_stop_words +[new_string] | https://www.daniweb.com/programming/software-development/threads/244449/stemming-words-in-python | CC-MAIN-2016-50 | refinedweb | 329 | 67.69 |
Selecting Add | New item, when I add a 'Forms Xaml Page' there is a plus (+) sign next to the file. Hovering the cursor says 'Pending Add'. I can't find any way to 'add' it to the project.
@RobertKamarowski well, if you didnt do that before :
ExposurePage page= new ExposurePage();
Hope that helps.
Mabrouk.
Answers
Can you share a screen shot?
It sounds almost like you have added the page but TFS has marked it to be added to the code repository.
If you look under the View folder you can see what I'm talking about.
@RobertKamarowski
Hi,
Can you explain more your problem?
I'm unable to access the ExposurePage or SessionPage from other classes. I believe Add | New item usually makes a new class part of the project/solution.
@RobertKamarowski well, if you didnt do that before :
ExposurePage page= new ExposurePage();
Hope that helps.
Mabrouk.
The pages/classes ARE in your project. The plus symbol is from TFS saying they are new and will be added to source control the next time you check in your project. The red check means those files are already checked into source control.
Open the XAML or the XAML.cs files and you'll see the namespace to use to reference them.
If you made them in that View folder then the namespace should be:
ESSPhotographyPortable.View.ExposurePage
Looking at that screen shot there is nothing going wrong with your adding of the new page.
I'm going to guess you're just not referencing the namespaces correctly. Maybe you imported some files from another project... Maybe you refactored this from a different namespace earlier... Hard to say.
Thank you Mabrouk. | https://forums.xamarin.com/discussion/comment/219522 | CC-MAIN-2020-40 | refinedweb | 280 | 84.98 |
Niraj Jha wrote: ServletContext is application-wide (common to all servlets inside an application); any attribute you want available at the application level is set in the ServletContext. ServletConfig, on the other hand, is individual to one servlet. Take an example:
App1 -> four servlets
ServletContext: 1
ServletConfig: 4
Each ServletConfig contains the individual servlet's name and the parameters declared inside <init-param> in web.xml. For more, check ServletContext and ServletConfig.
Bear Bibeault wrote:A forum is not a suitable substitute for a tutorial or the documentation. Have you read the javadoc entries for those classes? If so, what specific question do you have regarding them?
sekhar kiran wrote:
Bear Bibeault wrote:A forum is not a suitable substitute for a tutorial or the documentation. Have you read the javadoc entries for those classes? If so, what specific question do you have regarding them?
I know it's a forum for discussing related topics. I know the concepts, but I have queries based on those concepts that I'm trying to clarify so I can develop them in code.
Niraj Jha wrote:If you know everything then what is your doubt?
Cesar Loachamin wrote: Hi, as Bear Bibeault said, your topic is generic and you should post a more specific question, but I'll give you a generic answer and example. ServletConfig is an interface with methods to get information related only to the servlet, such as its name and init parameters. ServletContext is a more generic interface, an application-wide context: with it you can read the global init parameters, set and get attributes that are visible to the whole application, and more. Here is an example:
@WebServlet(name = "ServletTest", urlPatterns = { "/TestServlet" },
            initParams = { @WebInitParam(name = "servlet-param", value = "some value") })
public class TestServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Get an init parameter shared by the whole application
        // (declared in a <context-param> element of web.xml)
        getServletContext().getInitParameter("init-param");

        // Set and get an attribute that is visible to the whole application
        getServletContext().setAttribute("attr1", "some value");
        String attr = (String) getServletContext().getAttribute("attr1");

        // Write messages to the server log
        getServletContext().log("Log message");

        // The name declared in the @WebServlet annotation
        String servletName = getServletConfig().getServletName();

        // The same name as declared in @WebInitParam
        String servletParam = getServletConfig().getInitParameter("servlet-param");
    }
}
I hope this helps you
Kind regards
Cesar
Ulf Dittmer wrote:See. If you look through the methods in javax.servlet.ServletConfig and javax.servlet.ServletContext it should become clear why they're useful (just ask yourself: if method X wasn't there, how would I accomplish what it does?).
I agree with Bear that working through a Servlet/JSP introduction or tutorial would probably serve you better at this point; you seem to have fundamental questions about these topics that can't be explained in a forum discussion like this.
sekhar kiran wrote: How do I see the Java API? If we type the command javap java.lang.String, it shows full details; is there a similar way for the JSP and servlet classes?
Amit Ghorpade wrote:
sekhar kiran wrote: how to see java api like if we type in command javap java.lang.String means it shows full details like the way for jsp and servlets
Add the JEE Javadoc to the IDE you are using.
Paul Clapham wrote:You can use javap on any class at all, if that's what you're asking. Just specify your classpath to include the class you wanted to disassemble. I'm not sure why you would want to do that, but then again perhaps that wasn't what you were asking. To tell the truth I couldn't really tell what your question was.
sekhar kiran wrote:it shows error ERROR:Could not find javax.http.ServletConfig
Paul Clapham wrote:It seems kind of pointless to spend time trying to run javap on those interfaces when the API documentation gives you the same information as javap and a lot more.
Amit Ghorpade wrote:
sekhar kiran wrote:it shows error ERROR:Could not find javax.http.ServletConfig
That is because the class was not found in the classpath.
Ulf Dittmer wrote:That has already been answered with "yes", and it has been pointed out to you why it doesn't work the way you are doing it. Please take the time to read and understand what the people who are trying to help you are writing.
sekhar kiran wrote: so what's the correct command to see the details?
Recipe
Description
A library for writing behavioural descriptions in York Lava, inspired by Page and Luk's "Compiling Occam into Field-Programmable Gate Arrays", Oxford Workshop on Field Programmable Logic and Applications, 1991. Features explicit clocking, signals as well as registers, shared procedure calls, and an optimiser. The implementation is short and sweet! Used in the implementation of the Reduceron, a graph reduction machine for Xilinx FPGAs.
To illustrate, consider the implementation of a sequential multiplier using the shift-and-add algorithm.
import Lava
import Recipe
We define a state type containing three registers: the two inputs to multiply, and the result of the multiplication.
data Mult n = Mult { a, b, result :: Reg n }
A value of type Mult n is created by newMult.
newMult :: N n => New (Mult n)
newMult = return Mult `ap` newReg `ap` newReg `ap` newReg
The shift-and-add recipe operates over a value of type Mult n.
shiftAndAdd s =
  While (s!b!val =/= 0) $
    Seq [ s!a <== s!a!val!shr
        , s!b <== s!b!val!shl
        , s!b!val!vhead |> s!result <== s!result!val + s!a!val
        , Tick
        ]
shr x = low +> vinit x
shl x = vtail x <+ low
Three remarks are in order:
- The ! operator is flipped application with a high precedence.

  infixl 9 !
  (!) :: a -> (a -> b) -> b
  x!f = f x
This gives descriptions an appropriate object-oriented flavour.
- The value of a variable is obtained using the function

  val :: Var v => v n -> Word n

  Registers (of type Reg) are an instance of the Var class.
- The functions +> and <+ perform cons and snoc operations on vectors, vhead takes the head of a vector, and =/= is generic disequality.
To actually perform a multiplication, the input variables need to be initialised.
multiply x y s = Seq [ s!a <== x, s!b <== y, s!result <== 0, Tick, s!shiftAndAdd ]
example :: Mult N8 -> Recipe
example s = s!multiply 5 25
simExample = simRecipe newMult example result
Evaluating simExample yields 25 :: Word N8.
See REDUCERON MEMO 23 (included in the package) for further details and examples.
Synopsis
- data Recipe
- (|>) :: Bit -> Recipe -> Recipe
- call :: Proc -> Recipe
- class Var v where
- (!) :: a -> (a -> b) -> b
- (-->) :: a -> b -> (a, b)
- type New a = RWS Schedule (Bit, Recipe) VarId a
- data Reg n
- newReg :: N n => New (Reg n)
- newRegInit :: N n => Word n -> New (Reg n)
- data Sig n
- newSig :: N n => New (Sig n)
- newSigDef :: N n => Word n -> New (Sig n)
- data Proc
- newProc :: Recipe -> New Proc
- recipe :: New a -> (a -> Recipe) -> Bit -> (a, Bit)
- simRecipe :: Generic b => New a -> (a -> Recipe) -> (a -> b) -> b
Recipe constructs
Mutable variables; named locations that can be read from and assigned to.
Methods
val :: v n -> Word n
Return the value of a variable of width n.
(<==) :: v n -> Word n -> Recipe
Assign a value to a variable of width n.
The New monad
Mutable variables: registers and signals
Register variables: assignments to a register come into effect in the clock-cycle after the assignment is performed; the initial value of a register is zero unless otherwise specified.
Signal variables: assignments to a signal come into effect in the current clock-cycle, but last only for the duration of that clock-cycle; if a signal is not assigned to in a clock-cycle then its value will be its default value, which is zero unless otherwise specified.
Shared procedures
newProc :: Recipe -> New Proc
Capture a recipe as shared procedure that can be called whenever desired; needless to say, the programmer should avoid parallel calls to the same shared procedure! | http://hackage.haskell.org/package/york-lava-0.2/docs/Recipe.html | CC-MAIN-2016-26 | refinedweb | 593 | 61.67 |
Details
Description
It should be possible to embed Pig calls in a scripting language and make functions defined in the same script available as UDFs.
This is a spin-off of the issue that lets users define UDFs in scripting languages.
Activity
Thanks Julien. I rebased the patch with the latest trunk and added an option (-greek) in the Main class.
Now one can run a "PIG-Greek" script with following command:
java -cp pig.jar:<jython jar>:<hadoop config dir> org.apache.pig.Main -g <pig-greek script>
or in local mode:
java -cp pig.jar:<jython jar> org.apache.pig.Main -x local -g <pig-greek script>
Thanks Richard!
In the previous patch, the executeScript method on ScriptPigServer returns a list of ExecJobs (one for each store statement in the script). Unfortunately, the order of ExecJobs in the list is indeterminate.
This patch fixes this problem by making the executeScript method return a PigStats object. One can then retrieve the output result by the alias corresponding to the store statement.
Here is an example:
P = pig.executeScript("""
A = load '${input}';
...
store G into '${output}';
""")
output = P.result("G")  # an OutputStats object
iter = output.iterator()
if iter.hasNext():
    # do something
else:
    # do something else
Attach the updated test program from Julien.
To run the example:
- tar -xvf pig-greek-test.tar
- java -cp pig.jar:<jython jar> org.apache.pig.Main -x local -g script/tc.py
The -g parameter on the command line should take two parameters, the scripting implementation instance name and the script itself.
That way we can have several scripting implementations.
java -cp pig.jar:<jython jar> org.apache.pig.Main -x local -g jython script/tc.py
case GREEK: {
    ScriptEngine scriptEngine = ScriptEngine.getInstance(instanceName);
    scriptEngine.run(new PigServer(pigContext), file);
    return ReturnCode.SUCCESS;
}
The end of loop condition in the script can just test for to_join_n emptiness. It was testing both because it did not know which one was to_join_n.
if (not P.result("to_join_n").iterator().hasNext()):
Attaching the test script modified based on Julien's comment. As for the command-line option -g, it can also take one parameter (the script file name) and let Pig determine the script engine by the file extension.
Using the file extension requires a registration mechanism (or hard coded list) so if it is supported it would be nice to be able to provide the class name of the scripting implementation as well.
I would like to use my own implementation of the scripting engine (let's say javascript) by specifying the class name in the command line.
similar to the mechanism for UDF inclusion:
Register 'test.py' using org.apache.pig.scripting.jython.JythonScriptEngine as myfuncs;
Alan has posted a proposal on the Pig wiki that includes embedding Pig in a scripting language. The proposal is based on the implementation here, via a JDBC-like compile, bind, run model.
Attaching the initial patch that aims to implement the embedding part of the above proposal.
Notes about the patch:
- Pig executes the top-level Jython statements in the script, no need to write a main() function.
- You can invoke a Jython script from the command line the same way as you invoke a standard Pig script as long as the first line of the script is #! /usr/bin/python.
Example:
java -cp jython.jar:pig.jar myscript.py
- The run method on ScriptEngine returns a Map<String, PigStats>, with one entry for each runtime Pig pipeline. For a named pipeline, the map key is the given pipeline name.
- The proposed API is implemented in two classes: ScriptPigServer and PigPipeline.
- The compile method now is a no-op, will be implemented later.
Hi Richard,
Some comments about PIG-1479_3.patch:
- The ScriptEngine implementations that can be used are still hardwired. As a user I would want to add a parameter to the command line to use my own (adding it to the classpath and providing the class name). For example I'm working on a javascript implementation for Pig-Greek. Currently I have no way of using it without modifying Pig's code.
- I like to not have to define a main() function for the top level code, however using regular expressions to separate functions from the main code seems at high risk of not working in many cases (in JythonScriptEngine.getFunctions(InputStream)). It would be better to trust an actual Python parser or to leave it as is: requiring a main() function.
Thanks Julien.
As for the second comment, there is a third option: separating the frontend (control flow code) from the backend (scripting UDFs) by putting them in different files, and requiring the control flow writer to explicitly register UDFs in his/her script. For example, in the control flow file script.py:
pig.registerUDF("myudfs.py", "mynamespace")
# control flow and PIG pipelines that use UDFs defined in myudfs.py
The advantage of this is that only UDF files are shipped to the backend while control flow file (and its dependencies) remains in front end. Obviously, the disadvantage is that you can't put everything in one file.
Attaching patch that addresses above comments:
- One can use --embedded option to specify his/her favorite script engine classname or keyword. For example
java -cp pig.jar:jython.jar org.apache.pig.Main --embedded jython myscript.py
- Implemented the proposed approach of separating frontend control flow script from backend UDF in scripting language. One needs to explicitly register UDF in Pig Latin or embedded Pig.
- Both compile() and bind() methods return objects. So one can write code in jython script like this:
results = pig.compile("<Pig Latin>").bind({param:value, ...}).run()
- One can also run embedded scripts using PigRunner.
This patch makes changes to the public interface PigProgressNotificationListener. It's ok, since it's marked evolving. Do we know how many people are using this and what we'll need to do to mitigate the changes for them?
PigPipeline needs better javadoc comments at the class level. The current javadocs confuse it with the defined Pig class.
Rather than the Pig class detailed in the design doc this patch has ScriptPigServer, which has a slightly different interface. Does this represent a change to the design or is there a yet to be built Pig class?
Do we need two classes BoundPipeline and MultiBoundPipeline? Could we instead have just BoundPipeline, and then for each run method there would be:
public List<PigStats> run()

public PigStats runSingle() {
    if (multijob) throw ...
    return run().get(0);
}
Then run is a valid call whether this is a single or multi-job situation, which means users don't have to write their code differently in situations where they are using both single and multi-job binds. In simple cases where users know they only have one thing bound they can use the simpler runSingle call. Calling runSingle when multiple things are bound would be an error.
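Alan's single-class proposal can be sketched as a self-contained toy. All class and field names here are illustrative stand-ins, not the actual Pig API:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative stand-in for Pig's statistics object
class PigStats {
    final String name;
    PigStats(String name) { this.name = name; }
}

// One class serves both the single-bind and multi-bind cases
class BoundPipeline {
    private final List<PigStats> jobs;

    BoundPipeline(PigStats... jobs) {
        this.jobs = Arrays.asList(jobs);
    }

    // Always valid, whether one or many parameter sets were bound
    public List<PigStats> run() {
        return jobs;
    }

    // Convenience for the single-bind case; an error otherwise
    public PigStats runSingle() {
        if (jobs.size() > 1) {
            throw new IllegalStateException(
                "multiple parameter sets bound; use run()");
        }
        return run().get(0);
    }
}
```

The point of the design is visible in the types: callers mixing single and multi-job binds can use run() everywhere, while runSingle() fails fast when its one-result assumption is violated.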
We need to mark the availability and stability of the ScriptEngine interface. I suspect it is Public Evolving.
Thanks Alan,
This patch makes changes to the public interface PigProgressNotificationListener. It's ok, since it's marked evolving. Do we know how many people are using this and what we'll need to do to mitigate the changes for them?
This interface is available only in Pig 0.8 which is just ready to release. So not many people are using it. On the other hand it's too late to get into 0.8. The reason for the change is that the embedded script could contain multiple Pig scripts and Pig runtime needs to tell users from which script they get the notification.
PigPipeline needs better javadoc comments at the class level. The current javadocs confuse it with the defined Pig class.
Will do.
Rather than the Pig class detailed in the design doc this patch has ScriptPigServer, which has a slightly different interface. Does this represent a change to the design or is there a yet to be built Pig class?
The patch breaks the Pig class interface into several classes: ScriptPigServer registers or defines in the global scope and compiles a Pig Latin script into a PigPipeline object. PigPipeline binds a set of variables and generates a BoundPipeline object, which then runs the bound pipeline. Embedded script writers will have access to a ScriptPigServer object called "pig" in the script.
Do we need two classes BoundPipeline and MultiBoundPipeline? Could we instead have just BoundPipeline, and then for each run method there would be: ...
I went back and forth between these two approaches. I'm fine with a single BoundPipeline class with two different run/runSingle methods.
We need to mark the availability and stability of the ScriptEngine interface. I suspect it is Public Evolving.
Will do.
Hi Richard,
Thank you for the updated patch.
My comments follow, all related to usability:
- Pig script invocation
The main invocation mechanism is as follows:
results = pig.compile("<Pig Latin>").bind({param:value, ...}).run()
I was proposing to also bind variables automatically to local variables in the current scope.
results = pig.compile("<Pig Latin>").bindToLocal().run()
or more simply
results = pig.run("<Pig Latin>")
(as implemented in the original submission)
I understand that all languages may not allow that, but all scripting languages I can think of allow it. Only compiled languages strip variable names. This could be optional for the implementation.
If the bind() step is usefull in some situations and is more generic, it is not the most frequent use case.
Implicit binding to local variables is an important feature. As the Pig script is embedded in a particular context, in most use cases the parameters will have the same name than the local variables used to populate them.
The goal is to embed Pig, making the integration seemless. Most cases won't need the indirection to have different parameter names from local variables, making it a burden for the developper.
- Ability to have the main program and the UDFs in the same script
This was the main reason I started this work. The goal was to have everything in one script. The fact that the UDFs are run on the slaves should not force the user to put them in a separate file. The main goal is to have the entire algorithm in the same place without arbitrary separations like this one.
When putting in the balance having a main() function vs not being able to have UDFs in the same file I will definitly choose to have a main() function.
Just embedding Pig without having UDFs in the same file is not very different from running the Pig command line from a script.
another possibility would to have scripts writtent in the following way:
def udf1() ... def udf2() ... def main() ... if __name__ == "__main__": main()
See:
Thanks Julien. How about the following proposal?
Pig script invocation:
Pig will use the bind() method to implicitly bind variables to local variables in the current scope. It'll do an implicit mapping of variables in the host language to parameters in Pig Latin:
results = pig.compile("<Pig Latin>").bind().run()
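The implicit binding just described is easy to demonstrate outside Pig: a scripting host can fill a parameterized Pig Latin string straight from the caller's local scope. A plain-Python sketch, in which string.Template stands in for Pig's parameter substitution (none of this is Pig's actual implementation, and the script text is illustrative):

```python
from string import Template

# A parameterized Pig Latin script; $$0 escapes to the literal
# positional reference $0 that Pig Latin itself uses.
PIG_SCRIPT = Template("""
A = load '$input';
B = filter A by $$0 is not null;
store B into '$output';
""")

def bind_to_locals(template, scope):
    """Substitute $name parameters from the given scope (e.g. locals())."""
    return template.substitute(scope)

def main():
    # Local variables whose names match the script's parameters
    input = "students.txt"
    output = "out"
    return bind_to_locals(PIG_SCRIPT, locals())

print(main())
```

The caller never spells out a {param: value} map; the names of the local variables do the wiring, which is exactly the convenience Julien argued for.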
Ability to have the control flow program and the UDFs in the same script:
I agree that it's good to have everything in one script. Since I can't think of a way to execute only the function definitions in Python, I'll go back to using a simple parser to separate the functions from the control flow program, so that UDFs can be registered before the control flow program runs.
A related issue is Python IMPORT statements. Users will be responsible for shipping the imported modules to the backend servers. Pig won't automatically resolve the module paths and ship the files to the backend.
+1 to using a fuzzy parser. I agree that being able to have the Python UDFs in the same file is important, and in user reviews others have voiced the same opinion. But forcing Python users to have a main function is going to seem very unnatural to them. So I think the fuzzy parsing is the best compromise.
Based on the feedback, the new patch contains the following changes:
- Support the main program and the UDFs in the same script. However, when mixing jython functions with top level control flow code, the script must use the idiomatic "conditional script" stanza:
def udf1():
    ...
def udf2():
    ...
if __name__ == '__main__':
    # control flow code
- Support explicit registering scripting UDFs:
Pig.registerUDF("udfs.py", "")
# control flow code
- Conform the Pig scripting API to the specification. The main change is that scripts now need to explicitly import the Pig class:
from org.apache.pig.scripting import Pig
...
results = Pig.compile("<Pig Latin>").bind().run()
Latest patch looks good. I just have one question. Why do we need the synchronous implementation of PigProgressNotificationListener (SyncProgressNotificationAdaptor)? In what case do we expect Pig to be notifying in parallel? I am assuming that we want to allow user scripts to be multi-threaded, but do we expect multiple threads to use the same PigProgressNotificationListener?
It is for parallel execution of a pipeline. User registers listener through PigRunner API:
public static PigStats run(String[] args, PigProgressNotificationListener listener) ;
It's expected that the same listener is used by all the threads (each executes an instance of the pipeline) in parallel.
I have reviewed the patch.
The latest changes look good to me.
Thanks Richard!
Minor changes to fix a couple of findbugs warnings. Rerun the test-patch:
77 release audit warnings (more than the trunk's current 467 warnings).
Release audit warnings are all html related.
Unit tests passed.
See:
To run the example (assuming javac, jar and java are in your PATH):
This contains a generic base class and a Python implementation.
To implement other scripting languages, extend org.apache.pig.greek.ScriptEngine | https://issues.apache.org/jira/browse/PIG-1479 | CC-MAIN-2017-04 | refinedweb | 2,293 | 66.03 |
Clojure has other traits as well, including its famous use of software transactional memory (STM) to avoid problems in multithreaded environments.
As a Web developer and a longtime Lisp aficionado, I've been intrigued by the possibility of writing and deploying Web applications written in Clojure. Compojure would appear to be a simple framework for creating Web applications, built on lower-level systems, such as "ring", which handles HTTP requests.
In my last article, I explained how to create a simple Web application using the "lein" system, modify the project.clj configuration file and determine the HTML returned in response to a particular URL pattern ("route"). Here, I try to advance the application somewhat, looking at the things that are typically of interest to Web developers. Even if you don't end up using Clojure or Compojure, I still think you'll learn something from understanding how these systems approach the problem.
Databases and Clojure
Because Clojure is built on the JVM, you can use the same objects in your Clojure program as you would in a Java program. In other words, if you want to connect to a PostgreSQL database, you do so with the same JDBC driver that Java applications do.
Installing the PostgreSQL JDBC driver requires two steps. First, you must download the driver, which is available at. Second, you then must tell the JVM where it can find the classes that are defined by the driver. This is done by setting (or adding to) the CLASSPATH environment variable—that is, put the driver in:
export CLASSPATH=/home/reuven/Downloads:$CLASSPATH
Once you have done that, you can tell your Clojure project that you want to include the PostgreSQL JDBC driver by adding two elements to the :dependencies vector within the defproject macro:
(defproject cjtest "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url ""
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [compojure "1.1.5"]
                 [hiccup "1.0.3"]
                 [org.clojure/java.jdbc "0.2.3"]
                 [postgresql "9.1-901.jdbc4"]]
  :plugins [[lein-ring "0.8.5"]]
  :ring {:handler cjtest.handler/app}
  :profiles {:dev {:dependencies [[ring-mock "0.1.5"]]}})
Now you just need to connect to the database, as well as interact with it. Assuming you have created a database named "cjtest" on your local PostgreSQL server, you can use the built-in Clojure REPL (lein repl) to talk to the database. First, you need to load the database driver and put it into an "sql" namespace that will allow you to work with the driver:
(require '[clojure.java.jdbc :as sql])
Then, you need to tell Clojure the host, database and port to which you want to connect. You can do this most easily by creating a "db" map to build the query string that PostgreSQL needs:
(def db {:classname "org.postgresql.Driver"
         :subprotocol "postgresql"
         :subname (str "//" "localhost" ":" 5432 "/" "cjtest")
         :user "reuven"
         :password ""})
With this in place, you now can issue database commands. The easiest way to do so is to use the with-connection macro inside the "sql" namespace, which connects using the driver and then lets you issue a command. For example, if you want to create a new table containing a serial (that is, automatically updated primary key) column and a text column, you could do the following:
(sql/with-connection db
  (sql/create-table :foo
                    [:id :serial]
                    [:stuff :text]))
If you then check in psql, you'll see that the table has indeed been created, using the types you specified. If you want to insert data, you can do so with the sql/insert-values function:
(sql/with-connection db
  (sql/insert-values :foo [:stuff] ["first post"]))
Next, you get back the following map, indicating not only that the data was inserted, but also that it automatically was given an ID by PostgreSQL's sequence object:
{:stuff "first post", :id 1}
What if you want to retrieve all of the data you have inserted? You can use the sql/with-query-results function, iterating over the results with the standard doseq function:
(sql/with-connection db
  (sql/with-query-results resultset ["select * from foo"]
    (doseq [row resultset]
      (println row))))
Or, if you want only the contents of the "stuff" column, you can use:
(sql/with-connection db
  (sql/with-query-results resultset ["select * from foo"]
    (doseq [row resultset]
      (println (:stuff row)))))
Databases and Compojure
Now that you know how to do basic database operations from the Clojure REPL, you can put some of that code inside your Compojure application. For example, let's say you want to have an appointment calendar. For now, let's assume that there already is a PostgreSQL "appointments" databases defined:
CREATE TABLE Appointments (
    id SERIAL,
    meeting_at TIMESTAMP,
    meeting_with TEXT,
    notes TEXT
);

INSERT INTO Appointments (meeting_at, meeting_with, notes)
VALUES ('2013-july-1 12:00', 'Mom', 'Always good to see Mom');
You'll now want to be able to go to /appointments in your Web application and see the current list of appointments. To do this, you need to add a route to your Web application, such that it'll invoke a function that then goes to the database and retrieves all of those elements.
Before you can do so, you need to load the PostgreSQL JDBC driver into your Clojure application. You can do this most easily in the :require section of your namespace declaration in handler.clj:
(ns cjtest.handler
  (:use compojure.core)
  (:require [compojure.handler :as handler]
            [compojure.route :as route]
            [clojure.java.jdbc :as sql]))
(I did this manually in the REPL with the "require" function, with slightly different syntax.)
You then include your same definition of "db" in handler.clj, such that your database connection string still will be available.
Then, you add a new line to your defroutes macro, adding a new /appointments URL, which will invoke the list-appointments function:
(defroutes app-routes
  (GET "/" [] "Hello World")
  (GET "/appointments" [] list-appointments)
  (GET "/fancy/:name" [name] say-hello)
  (route/resources "/")
  (route/not-found "Not Found"))
Finally, you define list-appointments, a function that executes an SQL query and then grabs the resulting records and turns them into a bulleted list in HTML:
(defn list-appointments [req]
  (html [:h1 "Current appointments"]
        [:ul
         (sql/with-connection db
           (sql/with-query-results rs ["select * from appointments"]
             (doall (map format-appointment rs))))]))
Remember that in a functional language like Clojure, the idea is to get the results from the database and then process them in some way, handing them off to another function for display (or further processing). The above function produces HTML output, using the Hiccup HTML-generation system. Using Hiccup, you easily can create (as in the above function) an H1 headline, followed by a "ul" list.
The real magic happens in the call to sql/with-query-results. That function puts the results of your database call in the rs variable. You then can do a number of different things with that resultset. In this case, let's turn each record into an "li" tag in the final HTML. The easiest way to do that is to apply a function to each element of the resultset. In Clojure (as in many functional languages), you do this with the map function, which transforms a collection of items into a new collection of equal length.
What does the format-appointment function do? As you can imagine, it turns an appointment record into HTML:
(defn format-appointment [one-appointment]
  (html [:li
         (:meeting_at one-appointment)
         " : "
         (:meeting_with one-appointment)
         " ("
         (:notes one-appointment)
         ")"]))
In other words, you'll treat the record as if it were a hash and then retrieve the elements (keys) from that hash using Clojure's shorthand syntax for doing so. You wrap that up into HTML, and then you can display it for the user. The advantage of decomposing your display functionality into two functions is that you now can change the way in which appointments are displayed, without modifying the main function that's called when /appointments is requested by the user.
in reply to Re: Re: Re: Re: Re: What should be returned in scalar context? (best practice), in thread What should be returned in scalar context?
Nevermind. You're not getting it. If you document only what your code does, you might as well just duplicate the source.
You say that my argument is "overly academic with no practical basis". Perhaps it is. It's not something I have considered. I'm just someone who hates documentation that is wrong and vague where it could be correct and clear if the author cared.
Technically and linguistically, you're wrong if you say that a Perl non-lvalue subroutine returns an array. Write one sentence more and your documentation is correct and clear.
Instead of
Returns an array of foos.
just write
Returns a list of foos. (In scalar context, returns the number of foos.)
and stop making people guess what you mean.
If you insist on redefining the word 'array', go ahead. But don't forget to document your meaning of the word.
I am perfectly aware that functions always return a list.
Does that mean I'm going to stop documenting that a function like sub f { return 0 } "returns a scalar"? Somehow I doubt you'd take issue with that even though it isn't technically true. That function is, after all, returning a list. The list simply has one element.
No, it is perfectly acceptable to call a one-element list a scalar in this context. However, it is not very customary to mention that the single value you return is a scalar. Usually, we talk about "a number" or "a string". "Returns true", "returns the integer portion", "returns a string", "returns the number of". In fact, no function in perlfunc is documented to return a scalar.
However, it is customary to indicate that a collection of numbers or strings is returned by using the word "list", because such a collection is called a "list" in Perl jargon.
By the way, why do you think my argument has no practical basis? That's nonsense. Consistency with existing jargon and documentation is very practical.
Your argument, if applied evenly, would require the abolishment of the phrase "returns a scalar" as well.
Saying that a function returns a scalar isn't even wrong: even if the function returns 15 scalars, it still also returns a scalar. It would be misleading, though, so I'd avoid it. But as I said, it is not common to say that a function returns any scalar at all. We usually describe what the scalars are, since we assume every Perl coder knows that normal (non-lvalue) subs can only return scalars.
Fortunately, we won't have to do that, because your position is completely untenable.
I'm wasting my time. You're even more stubborn than I am. Even when presented with facts and prior art, you refuse to see that what you do is different from what others have agreed on.
By saying your function "returns a scalar" or "returns an array" or "returns a list" you are able to describe exactly how the code is written without trying to describe how Perl works.
A sub returns a foo or a list of bars. You don't document that a function returns a scalar or just a list. But if you do, that's acceptable. Documenting that a function returns an array is *not acceptable* if it does not return an array. Is that so hard? Don't say your function returns 2 if it does not return 2. And don't say your function returns 1+2 if it has return 1+2 in it. It returns 3. Stick to the facts and keep it simple (but never make it inaccurate). This is technical documentation, not prose.
The reason it makes sense to say those things is that context is imposed before the value is returned.
Actually, it depends on how you look at it. Context is propagated to every single return statement, and to the last statement of the sub.
When you return @ret; the list returned is composed of elements generated by evaluating @ret in the required context. If it is in scalar context, you get the number of elements of @ret. If it is in list context, you get all of the elements of @ret. Likewise, if you return (33, 42); the list returned is created by evaluating (33, 42) in scalar or list context as necessary.
That is correct. But even with return @foo, the array @foo itself is never returned. It cannot be referenced or mutated. The thing that is returned is not an array. The function does not return an array, ever.
It's very simple. You can call something an array if it starts with @. Because lvalue subs are reasonably new, some work-around syntax is needed to actually use an lvalue sub as an array: @{\foo} instead of just foo, and that too proves my point that only if it starts with @, it is an array.
Note that I'm not saying that everything that does start with @ is an array. It could be a slice.
This rule also means that in $foo[0], $foo is not the array. @foo is the array. Another typical aspect of arrays is that you can use them in array context, like the context imposed on the first argument of push. If you can't push @{\foo()}, "bar";, foo did not return a regular array. If it did return an array and it doesn't work, that is because it was made readonly or magical (tied).
Obey gravity, it's the law.
In other words, there is always a disconnect between what you return and what Perl returns.
Better described as "there can be a disconnect between syntax and semantics". And API documentation is about explaining what happens, not about how you make that happen.
$i++; # increment $i # silly.
Perl is already well-documented.
And it is time to document your own code well too. Perl set the example; now all you have to do is follow it. Perl is well known for its high-quality documentation, but it wouldn't be if perlfunc did crazy things like tell people that some function returns an array.
If keys returned an array, not only would it return a list of elements in list context and a number of elements in scalar context, it would also be spliceable. And pushable, and shiftable. And the best of all: it would be referable. Yes, that is silly. But that is what returning an array is.
Juerd
# { site => 'juerd.nl', plp_site => 'plp.juerd.nl', do_not_use => 'spamtrap' }
This is my last node in this discussion. I am unable to convince you. You would defy gravity if you could. I just hope other people who read this thread did learn from it, and don't say that functions return arrays.
On 26.10.2010 15:57, Alasdair G Kergon wrote:

> On Tue, Oct 26, 2010 at 02:59:21PM +0200, Zdenek Kabelac wrote:
>> Updated patchset for NULL pointer dereference issues reported by clang.
>>
>> Unlike the first version - this time a less aggressive solution is used.
>> INTERNAL_ERRORs are reported in these moments (if they would ever happen),
>> and the execution path aborts when such conditions are met.
>> The previous version was rather ignoring these paths and could lead to
>> unwanted execution of other code parts.
>
> Well, the ones I've looked at here seem to be more about dealing with
> shortcomings in the static analysis code rather than fixing real bugs.
> Some of them can never be triggered within current LVM code.

The static analyzer is currently incapable of modeling data-structure behavior well enough to understand that some settings can never happen, and sometimes it constructs a very complex code path to reach a NULL pointer at the end. (Instrumenting with nonnull attributes would also be handy here - but that's a long-term goal.)

However, my small patches here really just try to clean up the warnings - the price of the checks seems to be quite low, and we do not need to look into the analyzer output again and again. We could also put them into #ifdef __clang__ ... #endif sections to avoid any runtime overhead - but I don't like spreading such ifdefs everywhere. I could also keep these patches in my private branch, so as not to be bothered with the same errors every time.

For now I did not want to spend too much time on this, so I have simply fixed, quickly and easily, what I considered worth looking at. Deeper analysis here will of course require some time - so I am placing it in my low-priority background queue.

Zdenek
I have a raspberry pi 3b+, and I am trying to communicated with a TPI synthesizer model 1001-B. I want to use Python and specifically the pyserial module to communicate with a serial port – i.e. the synthesizer, which is connected via USB to the raspberry pi. The synthesizer specifies some particular setting for the python code:
“Internally the communication USB port is connected to an FTDI chip that acts as a USB to UART translator. Therefore, the user’s program accesses the unit as if it was a COM port. The driver installed in the computer maps the computer’s USB port to a virtual COM port. The user’s program must set the COM port parameters to the following… • Baud rate: 3,000,000 (3 Mbit/sec) • Data bits: 8 • Stop bits: 1 • Parity: None • Handshake: Request to send”
The code I tried was:
import serial
ser = serial.Serial(port='/dev/ttyUSB0', baudrate=3000000, parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE, bytesize=serial.EIGHTBITS)
I then ran the following code, hoping to write to the device and enable user control on the device:
ser.write(chr(0xAA) + chr(0x55) + chr(0x00) + chr(0x02) + chr(0x08) + chr(0x01) + chr(0xF4))
I chose this particular set of strings because the user manual specified it. “The full packet including the header and checksum is… 0xAA, 0x55, 0x00, 0x02, 0x08, 0x01, 0xF4” where 0xAA, 0x55, 0x00, 0x02 is the header, 0x08, 0x01 is the body, and 0xF4 is the checksum.
When I did, the device did not write anything back. Any ideas? Can I use pyserial with Linux? Help!
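One thing worth ruling out here: in Python 3, `Serial.write` needs `bytes`, not a string built by concatenating `chr()` calls. The sketch below builds the packet as bytes. Two details are my assumptions, not from the manual: that `0x00 0x02` is a payload-length field, and that the trailing byte is an XOR checksum (the XOR of the six preceding bytes does come out to 0xF4). The manual's "Request to send" handshake maps to pyserial's `rtscts=True`.

```python
def build_packet(body):
    # Header: sync bytes 0xAA 0x55, then (assumed) a 16-bit payload length.
    frame = [0xAA, 0x55, 0x00, len(body)] + list(body)
    checksum = 0
    for b in frame:  # assumed: XOR checksum over header + body
        checksum ^= b
    return bytes(frame + [checksum])

pkt = build_packet([0x08, 0x01])
print(pkt.hex())  # aa5500020801f4 -- matches the manual's example packet

# Untested port-setup sketch with pyserial (RTS/CTS handshake enabled):
# import serial
# ser = serial.Serial('/dev/ttyUSB0', baudrate=3000000, bytesize=8,
#                     parity=serial.PARITY_NONE, stopbits=1, rtscts=True)
# ser.write(pkt)   # bytes, not a str built from chr()
# reply = ser.read(7)
```

If the checksum rule really is XOR, `build_packet` reproduces the manual's full example packet exactly.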
- 1Ah let me see. (1) So your TPI synthesizer is using a USB to UART/serial/TTL cable/adapter (never mind FTDI, which is not relevant here) to talk to the other side. (2) The other side, in your case Rpi, is of course using serial, either the UART RxD, TxD pins, or also a USB to serial cable to the other side. (3) In other words, TPIS has a USB to serial cable, Rpi also has a serial cable (on board serial, or also USB to serial), and the two serial cables join end to end (Important: Rx/TX wires "crossing over"). Please let me know if I see the picture correctly, before I move on. – tlfong01 3 hours ago
- 1Also can you give me the web link to the AT command set, or the user manual that describes the serial spec 3kbd8N1, the AT commands and the package spec: header, body, checksum etc – tlfong01 3 hours ago
- I casually googled and found the following, but no luck with the AT command set:(1) Review: TPI-1001-B RF Synthesizer interferencetechnology.com/review-tpi-1001-b-rf-synthesizer (2) TPI-1001-B RF Synthesizer Features rf-consultant.com/products/tpi-1001-b-signal-generator (3) TPI-1001-B RF Synthesizer Features YouTube rf-consultant.com/products/tpi-1001-b-signal-generator (4) AliBaBa TPI-1001-B – $450 alibaba.com/product-detail/TPI-1001-B_50021319321.html / to continue, … – tlfong01 2 hours ago
- (5) TPI-1001A User Manual – Trinity Power Inductry studylib.net/doc/18665149/tpi-1001-a-user-manual—rf – tlfong01 2 hours ago
- BTW, I feel jealous that you have an US$450, 3.5MH to 4.4GHz sig gen, I am playing with 2.4GHz right now and I also have a couple of UART controlled ADxxxx sig gen chips and module, but all are low frequency (2.4GHz or lower). I also have a UART controlled FM receiver. Perhaps the time has come for me to invest US$450 for a new toy 🙂 Reference to my 2.4GHz project chat.stackexchange.com/rooms/103645/… – tlfong01 2 hours ago
- UART controlled devices using AT commands are very similar, except of course in the details of the command format. My usual troubleshooting suggestion is forget about Rpi and python, just use Win10 RealTerm to do manual AT command input, and only start python programming AFTER you find manual testing OK. More references: (1) UART FPV raspberrypi.stackexchange.com/questions/105223/… – tlfong01 58 secs ago Edit
- (2) UART Projector (Read my comments) raspberrypi.stackexchange.com/questions/105405/… (3) UART XY Sig Gen raspberrypi.stackexchange.com/questions/104779/… (4) ADS9850 ICL9338 Sig Gen raspberrypi.stackexchange.com/questions/96423/… – tlfong01 32 secs ago Edit
On Tuesday 10 January 2017 16:55, Ethan Furman wrote:

> On 01/09/2017 09:18 PM, Steven D'Aprano wrote:
>
>> The docs say that enums can be iterated over, but it isn't clear to me
>> whether they are iterated over in definition order or value order.
>>
>> If I have:
>>
>> class MarxBros(Enum):
>>     GROUCHO = 999
>>     CHICO = 5
>>     HARPO = 11
>>     ZEPPO = auto()
>>     GUMMO = -1
>>
>> GROUCHO, CHICO, HARPO, ZEPPO, GUMMO = list(MarxBros)
>
> In Python 3 it is always definition order.

I only care about Python 3 for this. Did I miss something in the docs, or should this be added?

[...]

>> On that related note, how would people feel about a method that injects
>> enums into the given namespace?

[...]

> Or you can do globals().update(MarxBros.__members__).

Sweet!

--
Steven
"Ever since I learned about confirmation bias, I've been seeing it everywhere." - Jon Ronson
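Both answers from the thread are easy to check directly (Python 3.6+, since `auto` is used; the `ZEPPO = auto()` value is whatever follows the last explicit value):

```python
from enum import Enum, auto

class MarxBros(Enum):
    GROUCHO = 999
    CHICO = 5
    HARPO = 11
    ZEPPO = auto()
    GUMMO = -1

# Iteration follows definition order, not value order.
print([member.name for member in MarxBros])
# ['GROUCHO', 'CHICO', 'HARPO', 'ZEPPO', 'GUMMO']

# Inject the members into the module namespace, as suggested above.
globals().update(MarxBros.__members__)
print(GUMMO is MarxBros.GUMMO)  # True
```

After the `globals().update(...)` call, the bare names `GROUCHO` … `GUMMO` refer to the enum members themselves.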
.
Step 1: Open Your Text Editor of Choice and Let's Get Started!
First, we'll import some needed modules
import sys
from datetime import datetime
These aren't all the modules we need, but we'll import the rest later on.
Next, we'll set up some user input that we need to begin the scan
We place these within a try so that we can add an exception, this exception will be in case the user decides that they don't want to continue with the scan.
Now, we'll add that exception we talked about
Now, on to the actual scanning portion of the script!
We'll start this by telling the user we've started scanning, starting the clock for the total scan duration, importing the needed scapy modules, and beginning the scan.
We simply print that we've started the scan, and set the start-time variable equal to the current time, datetime.now()
Then, we imported the necessary modules from scapy
from scapy.all import srp,Ether,ARP,conf
We set the conf.verb variable equal to 0
conf.verb = 0
Now we start the scanning, we set two variables, ans and unans equal to srp(Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(pdst = ips), timeout = 2, iface = interface, inter = 0.1). the series of f's and colons is the default broadcast address, we set the pdst variable equal to the range of IP address we prompted the user for in the beginning, timeout is set to two, iface gets set to the interface to got from the user before, and the inter variable is set to 0.1, this helps to avoid skipping packets.
Now that we have this information, all we need to do is go through it and present it to the user!
We print MAC - IP so that we can set up a format for the results to be displayed, we then use a for loop to iterate through the results and print the values, once we're through with that, we stop the clock for the scan duration by declaring another variable equal to datetime.now(). We find the total scan time by subtracting the two and print the result.
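The article's code listings were images and did not survive extraction, so here is a sketch of the full script as described in the steps above, in Python 3 (the function and variable names are mine; scapy must be installed, and a real scan needs root privileges):

```python
#!/usr/bin/env python3
# Sketch of the ARP scanner described above.
import sys
from datetime import datetime

def format_result(mac, ip):
    # One line per responding host, matching the "MAC - IP" header.
    return "%s - %s" % (mac, ip)

def arp_scan(interface, ips):
    # scapy is imported lazily so the module loads even where it isn't installed.
    from scapy.all import srp, Ether, ARP, conf
    conf.verb = 0  # silence scapy's own output
    ans, unans = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ips),
                     timeout=2, iface=interface, inter=0.1)
    return [(rcv[Ether].src, rcv[ARP].psrc) for snd, rcv in ans]

def main():
    try:
        interface = input("[*] Enter Desired Interface: ")
        ips = input("[*] Enter Range of IPs to Scan for: ")
    except KeyboardInterrupt:
        print("\n[*] User Requested Shutdown...")
        sys.exit(1)

    print("\n[*] Scanning...")
    start_time = datetime.now()
    print("MAC - IP")
    for mac, ip in arp_scan(interface, ips):
        print(format_result(mac, ip))
    duration = datetime.now() - start_time
    print("\n[*] Scan Complete! Duration: %s" % duration)

if __name__ == "__main__" and sys.stdin.isatty():
    main()  # prompt only when run interactively
```

Run it, enter an interface such as vmnet8 and a range such as 192.168.29.1-255, and each responding host is printed as a MAC/IP pair with the total scan duration at the end.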
Step 2: Test It Out!
Now that we have our script, let's test it out!
We'll be scanning for my virtual system running Windows XP
Here we can see that we'll be looking for the IP address of 192.168.29.132.
Let's navigate to our script and get started!
We start our script and we will be prompted for some required information
In this instance I'll be entering vmnet8 as my desired interface
I'll be scanning for the entire range of IP addresses, from 192.168.29.1 to 192.168.29.255
We will be prompted that the scanning has started, now we wait for the scan to finished and print our results
We see here that our scan was successful! We were able to confirm the Windows system presence on the network!
Step 3: Feedback!
Let me know what you think, post any questions you might have to the comments, and I'm sure they'll be answered!
6 Comments
Thanks for the tutorial!
Cool. +1
Cool code. Can you perhaps send the code through paste bin?
Thanks. keep up the tutorials
Here it is
Thanks!
-Defalt
Nice code/tutorial :)
Thank you for the code.
I found this error:
File "<string>", line 1, in bind
socket.error: [Errno 19] No such device
The given ip is in my network but still it is giving this error.
How do I solve this?
AWS SDK for Ruby
The official AWS SDK for Ruby.
Installation
Version 1 of the AWS SDK for Ruby is available on rubygems.org as two gems:
aws-sdk-v1
aws-sdk
This project uses semantic versioning. If you are using the
aws-sdk gem, we strongly recommend you specify a version constraint in
your Gemfile. Version 2 of the Ruby SDK will not be backwards compatible
with version 1.
# version constraint gem 'aws-sdk', '< 2' # or use the v1 gem gem 'aws-sdk-v1'
If you use the
aws-sdk-v1 gem, you may also load the v2 Ruby SDK in the
same process; The v2 Ruby SDK uses a different namespace, making this possible.
# when the v2 SDK ships, you will be able to do the following gem 'aws-sdk', '~> 2.0' gem 'aws-sdk-v1'
If you are currently using v1 of
aws-sdk and you update to
aws-sdk-v1, you
may need to change how your require the Ruby SDK:
require 'aws-sdk-v1' # not 'aws-sdk'
If you are using a version of Ruby older than 1.9, you may encounter problems with Nokogiri. The authors dropped support for Ruby 1.8.x in Nokogiri 1.6. To use aws-sdk, you'll also have to install or specify a version of Nokogiri prior to 1.6, like this:
gem 'nokogiri', '~> 1.5.0'
Basic Configuration
You need to provide your AWS security credentials and choose a default region.
AWS.config(access_key_id: '...', secret_access_key: '...', region: 'us-west-2')
You can also specify these values via
ENV:
export AWS_ACCESS_KEY_ID='...' export AWS_SECRET_ACCESS_KEY='...' export AWS_REGION='us-west-2'
Basic Usage
Each service provides a service interface and a client.
ec2 = AWS.ec2 #=> AWS::EC2 ec2.client #=> AWS::EC2::Client
The client provides one method for each API operation. The client methods accept a hash of request params and return a response with a hash of response data. The service interfaces provide a higher level abstration built using the client.
Example: list instance tags using a client
resp = ec2.client.describe_tags(filters: [{ name: "resource-id", values: ["i-12345678"] }]) resp[:tag_set].first #=> {:resource_id=>"i-12345678", :resource_type=>"instance", :key=>"role", :value=>"web"}
Example: list instance tags using the AWS::EC2 higher level interface
ec2.instances['i-12345678'].tags.to_h #=> {"role"=>"web"}
See the API Documentation for more examples.
Testing
All HTTP requests to live services can be globally mocked (e.g. from within the
spec_helper.rb file):
AWS.stub!
Links of Interest
Supported Services
The SDK currently supports the following services:
License
This SDK is distributed under the Apache License, Version 2.0.
Copyright 2012.
Stereo ray
This scripts provides the stereo_ray command, which saves two ray traced images in PNG format. The left image is rotated +3 degrees and the right image is rotated -3 degrees.
Note that the end result is almost identical to using "walleye" stereo mode, which might be more convenient and is readily available in PyMOL.
Contents
Usage
stereo_ray filename [, width [, height ]]
Example
import stereo_ray
stereo_ray output, 1000, 600
This will create two images, "output_l.png" and "output_r.png". Just paste the two images next to each other (in some image editing program) and you're done.
Manually
To obtain the same result without the script, use the ray and the png command like this:
ray angle=+3
png left-image.png
ray angle=-3
png right-image.png
Obtain almost the same result using the stereo command:
stereo walleye
png combined.png, ray=1
NIS+:
"Differences Between NIS+ Tables and NIS Maps"
"Use of Custom NIS+ Tables"
"Connections Between Tables"
NIS+ tables differ from NIS maps in many ways, but two of those differences should be kept in mind during your namespace design:
NIS+ uses fewer standard tables than NIS.
NIS+ tables interoperate with /etc files differently than NIS maps did in the SunOS 4.x releases.
Review the 17 standard NIS+ tables to make sure they suit the needs of your site. They are listed in Table 2-5. Table 2-6 lists the correspondences between NIS maps and NIS+ tables.
Do not worry about synchronizing related tables (for details, see the Solaris Naming Setup and Configuration Guide). Key-value tables have two columns, with the first column being the key and the second column being the value. Therefore, when you update any information, such as host information, you need only update it in one place, such as the hosts table. You no longer have to worry about keeping that information consistent across related maps.

Note the new names of the automounter tables:

auto_home (old name: auto.home)

auto_master (old name: auto.master)

The dots were changed to underscores because NIS+ uses dots to separate directories. Dots in a table name can cause NIS+ to mistranslate names. For the same reason, machine names cannot contain any dots. You must change any machine name that contains a dot to something else; for example, a machine named sales.alpha is not allowed. You could rename it to something without a dot, such as sales-alpha. See Table 2-5.

Table 2-5 NIS+ Tables
Table 2-6 Correspondences Between NIS Maps and NIS+ Tables
NIS+ has one new table for which there is no corresponding NIS table:
sendmailvars. The
sendmailvars table stores the mail domain
used by sendmail.
The manner in which NIS and other network information services interacted with /etc files in the SunOS 4.x environment was controlled by the /etc files using the +/- syntax. How NIS+, NIS, DNS and other network information services interact with /etc files in the Solaris operating environment is determined by the name service switch. The switch is a configuration file, /etc/nsswitch.conf, located on every Solaris operating environment client. It specifies the sources of information for that client: /etc files, DNS zone files (hosts only), NIS maps, or NIS+ tables. The nsswitch.conf configuration file of NIS+ clients resembles the simplified version in Example 2-1.
In other words, for most types of information, the source is first an NIS+ table, then an /etc file. For the passwd and group entries, the sources can either be network files or from /etc files and NIS+ tables as indicated by +/- entries in the files.
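A representative simplified switch file for an NIS+ client looks something like the following (the entries here are illustrative, not the exact Example 2-1; the compat/+- handling of passwd and group is as described above):

```
# /etc/nsswitch.conf (simplified, illustrative)
passwd:        compat
passwd_compat: nisplus
group:         compat
group_compat:  nisplus
hosts:         nisplus [NOTFOUND=return] files
networks:      nisplus [NOTFOUND=return] files
services:      nisplus [NOTFOUND=return] files
```

Each line names an information type and the ordered list of sources consulted for it.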
You can select from three versions of the switch-configuration file or you can create your own. For instructions, see the Solaris Naming Administration Guide.
Webservices in Java with JAX-WS
A web service is an application that serves data to other applications over the network. Web services are independent of the programming language that serves or receives the data. The main data format is XML, which can be transported over regular web protocols such as HTTP (SOAP).

WSDL (Web Services Description Language) is an XML-formatted description of a web service and what it has to offer. It includes abstract definitions of operations and messages, as well as protocols and the format of messages.
In the following example we will use JAX-WS, which is already supported in JDK 1.6.
JAX-WS stands for Java API for XML - Web Services. Java specifies some annotations which simplify implementation of web services (@WebService, @WebMethod, @WebParam, @SOAPBinding...)
MyService
First we need to create a class and methods that our webservice will offer. Our service will return a greeting (String), a sum of two numbers (Integer) or an object (Person). A Person is a POJO containing only String name and getter and setter.
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;
@WebService
public class MyService {
@WebMethod
public String sayHello(String name) {
System.out.println("sayHello()");
return "Hi there " + name;
}
@WebMethod
public Integer add(Integer a, Integer b) {
System.out.println("add()");
return a+b;
}
@WebMethod
public Person getPerson(String name) {
System.out.println("getPerson()");
Person p = new Person();
p.setName(name);
return p;
}
}
Now we need to publish the service to the web. This class will imitate the server offering a service.
import javax.xml.ws.Endpoint;

public class WsTest {
public static void main(String[] args) {
MyService ms = new MyService();
// The address below is illustrative; the original URL was lost.
Endpoint.publish("http://localhost:8080/ws/myservice", ms);
}
}
The WSDL document is generated automatically at runtime. To see it, open a web browser and go to the service address with ?wsdl appended.
Create web service client with wsimport
OK, so far we created a service and published it on the web (keep WsTest running).
As implied at the beginning, the WSDL file contains all abstract definitions of operations in web service. Based on these definitions we know exactly what to send to the service and what we can expect as response.
JDK includes a tool called wsimport. It is used to read WSDL file and generate Java classes which we will use to make a client application.
wsimport -p my.project.webservices.generated -keep http://localhost:8080/ws/myservice?wsdl
where:
- -p defines package names of created classes
- -d defines directory where classes will be created (here omitted; use current dir)
- -keep generated .java files as well as .class files
- the last argument is URL of WSDL
Include the generated classes in the web service client project, then implement the WsClient:
public class WsClient {
public static void main(String... a) {
MyServiceService mss = new MyServiceService();
MyService s = mss.getMyServicePort();
System.out.println(s.sayHello(("Fred")));
System.out.println(s.add(2, 3));
Person person = s.getPerson("Lucy");
System.out.println(person.getName());
}
}
Run the WsClient application. You should see the results:
Fred
5
Lucy
Congratulations! You just created a web service!
Alternative clients
As an alternative to our Java WsClient, you can create a client in any other programming language (e.g. C#). All you need is the WSDL.

soapUI is by far the best and easiest-to-use application for testing web services, with a nice GUI (did I just say nice about a Java GUI?). Just import the WSDL. Search for it on the web.
import pymc as pm

parameter = pm.Exponential("poisson_param", 1)
data_generator = pm.Poisson("data_generator", parameter)
data_plus_one = data_generator + 1

Parents of `data_generator`: {'mu': <pymc.distributions.Exponential 'poisson_param' at 0x114d47290>}
Children of `data_generator`: set([<pymc.PyMCObjects.Deterministic '(data_generator_add_1)' at 0x114d47c90>])
Of course a child can have more than one parent, and a parent can have many children.
print "parameter.value =", parameter.value print "data_generator.value =", data_generator.value print "data_plus_one.value =", data_plus_one.value
parameter.value = 0.984449056031
data_generator.value = 0
data_plus_one.value = 1

discrete_uni_var = pm.DiscreteUniform("discrete_uni_var", 0, 4)

where 0, 4 are the DiscreteUniform-specific lower and upper bounds on the random variable. The PyMC docs contain the specific parameters for stochastic variables. (Or use `object??`; for example, `pm.DiscreteUniform??`.)
The `size` keyword in the call to a Stochastic variable creates a multivariate array of (independent) stochastic variables. The array behaves like a NumPy array when used like one, and references to its `value` attribute return NumPy arrays. The `size` argument also lets you create many variables at once instead of naming each one; for example:

betas = pm.Uniform("betas", 0, 1, size=N)
Calling random()
We can also call on a stochastic variable's
random() method, which (given the parent values) will generate a new, random value. Below we demonstrate this using the texting example from the previous chapter.
lambda_1 = pm.Exponential("lambda_1", 1)  # prior on first behaviour
lambda_2 = pm.Exponential("lambda_2", 1)  # prior on second behaviour
tau = pm.DiscreteUniform("tau", lower=0, upper=10)  # prior on behaviour change

print "lambda_1.value = %.3f" % lambda_1.value
print "lambda_2.value = %.3f" % lambda_2.value
print "tau.value = %.3f" % tau.value

lambda_1.random(), lambda_2.random(), tau.random()

print "After calling random() on the variables..."
print "lambda_1.value = %.3f" % lambda_1.value
print "lambda_2.value = %.3f" % lambda_2.value
print "tau.value = %.3f" % tau.value

lambda_1.value = 0.666
lambda_2.value = 1.741
tau.value = 3.000
After calling random() on the variables...
lambda_1.value = 1.800
lambda_2.value = 0.190
tau.value = 2.000
And in PyMC code:
import numpy as np
n_data_points = 5  # in CH1 we had ~70 data points

@pm.deterministic
def lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):
    out = np.zeros(n_data_points)
    out[:tau] = lambda_1  # lambda before tau is lambda_1
    out[tau:] = lambda_2  # lambda after tau is lambda_2
    return out
@pm.deterministic
def some_deterministic(stoch=some_stochastic_var):
    return stoch.value**2
will return an
AttributeError detailing that
stoch does not have a
value attribute. It simply needs to be
stoch**2. During the learning phase, it's the variable's
value that is repeatedly passed in, not the actual variable.
Notice in the creation of the deterministic function we added defaults to each variable used in the function. This is a necessary step, and all variables must have default values.
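The mechanism behind those required defaults is a plain Python fact: default arguments are evaluated once, at definition time, so the wrapped function keeps a handle on each parent object. A PyMC-free sketch of the idea (names here are mine, and `fake_deterministic` only mimics how a decorator can discover parents from the defaults):

```python
def fake_deterministic(fn):
    # Stand-in for @pm.deterministic: the "parents" are simply whatever
    # objects were bound as default arguments when fn was defined.
    return {"eval": fn, "parents": fn.__defaults__}

parent_value = 3

node = fake_deterministic(lambda stoch=parent_value: stoch ** 2)
print(node["parents"])  # (3,)
print(node["eval"]())   # 9
```

This is why a deterministic function without defaults gives PyMC no way to find its parents.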
%matplotlib inline
from IPython.core.pylabtools import figsize
from matplotlib import pyplot as plt
figsize(12.5, 4)

samples = [lambda_1.random() for i in range(20000)]

data = np.array([10, 25, 15, 20, 35])
obs = pm.Poisson("obs", lambda_, value=data, observed=True)
print obs.value
[10 25 15 20 35]
We wrap all the created variables into a
pm.Model class. With this
Model class, we can analyze the variables as a single unit. This is an optional step, as the fitting algorithms can be sent an array of the variables rather than a
Model class. I may or may not use this class in future examples ;)
model = pm.Model([obs, lambda_, lambda_1, lambda_2, tau])
A good starting point in Bayesian modeling is to think about how your data might have been generated. Position yourself in an omniscient role, and try to imagine how you would recreate the dataset. Below we simulate one possible realization:

1. Specify when the user's behaviour switches by sampling from $\text{DiscreteUniform}(0, 80)$:

tau = pm.rdiscrete_uniform(0, 80)
print tau
53
2. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution:
alpha = 1. / 20.
lambda_1, lambda_2 = pm.rexponential(alpha, 2)
print lambda_1, lambda_2
22.3473316377 22.00741773
3. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example:
data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]

PyMC's engine is designed to find good parameters, $\lambda_i, \tau$, that maximize this probability.
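As a sanity check, the three steps above can be reproduced with plain NumPy, without PyMC at all (the function name and seed are mine):

```python
import numpy as np

rng = np.random.RandomState(42)

def simulate_sms(n_days=80, alpha=1.0 / 20):
    tau = rng.randint(0, n_days)                               # step 1: switchpoint
    lambda_1, lambda_2 = rng.exponential(1.0 / alpha, size=2)  # step 2: rates
    counts = np.r_[rng.poisson(lambda_1, tau),                 # step 3: daily counts
                   rng.poisson(lambda_2, n_days - tau)]
    return tau, counts

tau, counts = simulate_sms()
print(tau, counts.sum())
```

Note that NumPy's exponential sampler takes a scale, so the rate $\alpha$ is passed as $1/\alpha$.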
The ability to generate artificial datasets is an interesting side effect of our modeling, and we will see that this ability is a very important method of Bayesian inference. We produce a few more datasets below:
def plot_artificial_sms_dataset():
    tau = pm.rdiscrete_uniform(0, 80)
    alpha = 1. / 20.
    lambda_1, lambda_2 = pm.rexponential(alpha, 2)
    data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]
    plt.bar(np.arange(80), data, color="#348ABD")
    plt.bar(tau - 1, data[tau - 1], color="r", label="user behaviour changed")
    plt.xlim(0, 80)

figsize(12.5, 5)
plt.suptitle("More examples of artificial datasets", fontsize=14)
for i in range(1, 5):
    plt.subplot(4, 1, i)
    plot_artificial_sms_dataset()

import pymc as pm

# The parameters are the bounds of the Uniform.
p = pm.Uniform('p', lower=0, upper=1)

p_true = 0.05  # remember, this is unknown
N = 1500
occurrences = pm.rbernoulli(p_true, N)

print occurrences  # Remember: Python treats True == 1, and False == 0
print occurrences.sum()
[False False False False ..., False False False False]
86
The observed frequency is:
# Occurrences.mean is equal to n/N.
print "What is the observed frequency in Group A? %.4f" % occurrences.mean()
print "Does this equal the true frequency? %s" % (occurrences.mean() == p_true)
What is the observed frequency in Group A? 0.0573
Does this equal the true frequency? False
We combine the observations into the PyMC
observed variable, and run our inference algorithm:
# include the observations, which are Bernoulli
obs = pm.Bernoulli("obs", p, value=occurrences, observed=True)

# To be explained in chapter 3
mcmc = pm.MCMC([p, obs])
mcmc.sample(18000, 1000)
[-----------------100%-----------------] 18000 of 18000 complete in 1.0 sec
We plot the posterior distribution of the unknown $p_A$ below:
figsize(12.5, 4)
plt.title("Posterior distribution of $p_A$, the true effectiveness of site A")
plt.vlines(p_true, 0, 90, linestyle="--", label="true $p_A$ (unknown)")
plt.hist(mcmc.trace("p")[:], bins=25, histtype="stepfilled", normed=True)
plt.legend()
<matplotlib.legend.Legend at 0x1158e0e90>
Our posterior distribution puts most weight near the true value of $p_A$, but also some weights in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations,
N, and observe how the posterior distribution changes.
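As an aside, this particular model also has a closed-form answer we can check against: a Bernoulli likelihood with a Uniform(0, 1) prior yields a Beta(1 + k, 1 + N − k) posterior. A quick plain-Python check, with k and N read off the output above:

```python
import random

# "yes" count and sample size taken from the output above.
k, N = 86, 1500

# Posterior is Beta(1 + k, 1 + N - k); its mean is (1 + k) / (2 + N).
posterior_mean = (1.0 + k) / (2.0 + N)
print(round(posterior_mean, 4))     # 0.0579 -- near the peak of the histogram

# Monte Carlo draws from the same posterior, mirroring mcmc.trace("p")[:]
rng = random.Random(1)
draws = [rng.betavariate(1 + k, 1 + N - k) for _ in range(10000)]
print(round(sum(draws) / len(draws), 3))
```

The MCMC histogram above should peak at essentially the same place; the Monte Carlo mean differs from the exact mean only by sampling noise.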
A similar analysis can be done for site B's data, but what we really care about is the difference between $p_A$ and $p_B$; we can infer $p_A$, $p_B$, and $\text{delta} = p_A - p_B$ all at once using PyMC's deterministic variables. (We'll assume for this exercise that $p_B = 0.04$, so $\text{delta} = 0.01$, $N_B = 750$ (significantly less than $N_A$) and we will simulate site B's data like we did for site A's data.)
import pymc as pm

figsize(12, 4)

# these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04

# notice the unequal sample sizes -- no problem in Bayesian analysis.
N_A = 1500
N_B = 750

# generate some observations
observations_A = pm.rbernoulli(true_p_A, N_A)
observations_B = pm.rbernoulli(true_p_B, N_B)
print "Obs from Site A: ", observations_A[:30].astype(int), "..."
print "Obs from Site B: ", observations_B[:30].astype(int), "..."
Obs from Site A: [0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0] ... Obs from Site B: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] ...
print observations_A.mean() print observations_B.mean()
0.0533333333333 0.04
# Set up the pymc model. Again assume Uniform priors for p_A and p_B.
p_A = pm.Uniform("p_A", 0, 1)
p_B = pm.Uniform("p_B", 0, 1)

# Define the deterministic delta function. This is our unknown of interest.
@pm.deterministic
def delta(p_A=p_A, p_B=p_B):
    return p_A - p_B

# Set of observations, in this case we have two observation datasets.
obs_A = pm.Bernoulli("obs_A", p_A, value=observations_A, observed=True)
obs_B = pm.Bernoulli("obs_B", p_B, value=observations_B, observed=True)

# To be explained in chapter 3.
mcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B])
mcmc.sample(20000, 1000)
[-----------------100%-----------------] 20000 of 20000 complete in 1.9 sec
Below we plot the posterior distributions for the three unknowns:
p_A_samples = mcmc.trace("p_A")[:]
p_B_samples = mcmc.trace("p_B")[:]
delta_samples = mcmc.trace("delta")[:]

print "Probability site A is WORSE than site B: %.3f" % \
    (delta_samples < 0).mean()
print "Probability site A is BETTER than site B: %.3f" % \
    (delta_samples > 0).mean()
Probability site A is WORSE than site B: 0.088
Probability site A is BETTER than site B: 0.912

The Binomial distribution has two parameters: $N$, a positive integer representing the number of trials, and $p$, the probability of an event occurring in a single trial. We'll use it to model cheating among students. First, the setup:

import pymc as pm

N = 100
p = pm.Uniform("freq_cheating", 0, 1)
Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students: 1 implies they cheated and 0 implies they did not.
true_answers = pm.Bernoulli("truths", p, size=N)

first_coin_flips = pm.Bernoulli("first_flips", 0.5, size=N)
print first_coin_flips.value
[False False False True False False False True False True True True False False True True True True False False True False False False False False False True False True True True False True True True True True False False True True True True False True False False True False False False True True True True True False True False False False True False False True True False True True True False False True True True False True False True False False True False False True True False False True False True True True True True True False False True]
Although not everyone flips a second time, we can still model the possible realization of second coin-flips:
second_coin_flips = pm.Bernoulli("second_flips", 0.5, size=N)
Using these variables, we can return a possible realization of the observed proportion of "Yes" responses. We do this using a PyMC
deterministic variable:
@pm.deterministic
def observed_proportion(t_a=true_answers,
                        fc=first_coin_flips,
                        sc=second_coin_flips):
    observed = fc * t_a + (1 - fc) * sc
    return observed.sum() / float(N)

observed_proportion.value
0.29999999999999999

If every student had cheated, we would expect to see approximately 3/4 of all responses be "Yes".
The researchers observe a Binomial random variable, with
N = 100 and
p = observed_proportion with
value = 35:
X = 35
observations = pm.Binomial("obs", N, observed_proportion, observed=True, value=X)
Below we add all the variables of interest to a
Model container and run our black-box algorithm over the model.
model = pm.Model([p, true_answers, first_coin_flips,
                  second_coin_flips, observed_proportion, observations])

# To be explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(40000, 15000)
[-----------------100%-----------------] 40000 of 40000 complete in 10 sec

Alternatively, knowing $p$ we also know the probability a student responds "Yes": $P(\text{"Yes"}) = \frac{1}{2}p + \frac{1}{4}$. Thus, we can create a deterministic function to evaluate the probability of responding "Yes", given $p$:
p = pm.Uniform("freq_cheating", 0, 1)

@pm.deterministic
def p_skewed(p=p):
    return 0.5 * p + 0.25
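The 0.5·$p$ + 0.25 skew can be sanity-checked with a direct simulation of the interview protocol in plain Python, independent of PyMC. (The 20% cheater rate below is an arbitrary test value, not something inferred from the data.)

```python
import random

def simulate_interviews(p_cheat, n_students, rng):
    """Run the privacy algorithm: first flip heads -> answer truthfully;
    first flip tails -> answer "Yes"/"No" according to a second flip."""
    yes = 0
    for _ in range(n_students):
        cheated = rng.random() < p_cheat
        if rng.random() < 0.5:            # first coin: heads
            yes += cheated
        else:                             # first coin: tails
            yes += rng.random() < 0.5     # second coin decides the answer
    return yes / n_students

rng = random.Random(0)
freq = simulate_interviews(0.2, 200_000, rng)
print(freq)   # close to 0.5 * 0.2 + 0.25 = 0.35
```

The observed "Yes" frequency hovers around 0.35, matching the skew formula for $p = 0.2$.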
yes_responses = pm.Binomial("number_cheaters", 100, p_skewed, value=35, observed=True)
Below we add all the variables of interest to a
Model container and run our black-box algorithm over the model.
model = pm.Model([yes_responses, p_skewed, p])

# To Be Explained in Chapter 3!
mcmc = pm.MCMC(model)
mcmc.sample(25000, 2500)
[-----------------100%-----------------] 25000 of 25000 complete in 1.0 sec
The Lambda class
Sometimes writing a deterministic function using the @pm.deterministic decorator can seem like a chore, especially for a small function. Built-in Lambda functions can handle operations like indexing or slicing with elegance and simplicity:

beta = pm.Normal("coefficients", 0, size=(N, 1))
x = np.random.randn(N, 1)
linear_combination = pm.Lambda(lambda x=x, beta=beta: np.dot(x.T, beta))

It is also possible to store multiple heterogeneous PyMC variables in a Numpy array (just set the array's dtype to object):

x = np.empty(N, dtype=object)
for i in range(0, N):
    x[i] = pm.Exponential('x_%i' % i, (i + 1) ** 2)
The remainder of this chapter examines some practical examples of PyMC modeling, beginning with the Challenger space shuttle O-ring data and their relationship with outside temperature.
Text(0.5,1,u'Defects of the Space Shuttle O-Rings vs temperature')

[Figure: logistic function plotted for several values of the $\beta$ parameter]

[Figure: logistic function with bias, plotted for several values of the $\alpha$ bias parameter]
Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).
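A quick numeric check of this claim in plain Python: with the chapter's parameterization $p(t) = 1/(1 + e^{\beta t + \alpha})$, the curve crosses 0.5 exactly where $\beta t + \alpha = 0$, so the bias moves the midpoint to $t = -\alpha/\beta$:

```python
import math

def logistic(t, beta, alpha=0.0):
    # Same form as the chapter's deterministic p(t): 1 / (1 + exp(beta*t + alpha))
    return 1.0 / (1.0 + math.exp(beta * t + alpha))

print(logistic(0.0, 3.0))         # 0.5 -- without bias the midpoint sits at t = 0
print(logistic(1.0, 3.0, -3.0))   # 0.5 -- alpha = -3 shifts the midpoint to t = 1
```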
Let's start modeling this in PyMC.

[Figure: probability density functions of Normal random variables for several values of $\mu$ and $\tau$]

import pymc as pm

temperature = challenger_data[:, 0]
D = challenger_data[:, 1]  # defect or not?

# notice the `value` here. We explain why below.
beta = pm.Normal("beta", 0, 0.001, value=0)
alpha = pm.Normal("alpha", 0, 0.001, value=0)

@pm.deterministic
def p(t=temperature, alpha=alpha, beta=beta):
    return 1.0 / (1. + np.exp(beta * t + alpha))

observed = pm.Bernoulli("bernoulli_obs", p, value=D, observed=True)

model = pm.Model([observed, beta, alpha])

# Mysterious code to be explained in Chapter 3
map_ = pm.MAP(model)
map_.fit()
mcmc = pm.MCMC(model)
mcmc.sample(120000, 100000, 2)
[-----------------100%-----------------] 120000 of 120000 complete in 8.9 sec
We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\alpha$ and $\beta$:
alpha_samples = mcmc.trace('alpha')[:, None]  # best to make them 1d
beta_samples = mcmc.trace('beta')[:, None]

Is our model appropriate? One way to check is to compare the observed data with an artificial dataset which we can simulate. The rationale is that if the simulated dataset does not appear similar, statistically, to the observed dataset, then likely our model is not accurately representing the observed data.
Previously in this chapter, we simulated artificial datasets by sampling from the priors. Here, though, we want datasets that reflect the posterior, so we create a new Bernoulli variable that shares $p$ but does not fix value=D, observed=True.
Hence we create:
simulated_data = pm.Bernoulli("simulation_data", p)
Let's simulate 10,000 datasets:
simulated = pm.Bernoulli("bernoulli_sim", p)
N = 10000

mcmc = pm.MCMC([simulated, alpha, beta, observed])
mcmc.sample(N)
[-----------------100%-----------------] 10000 of 10000 complete in 1.4 sec
figsize(12.5, 5)
simulations = mcmc.trace("bernoulli_sim")[:]

[Table: posterior probability of defect alongside the realized defect for each flight, e.g. 0.24 | 1, 0.28 | 0, ..., 0.87 | 1; shown again sorted by posterior probability]
Text(0,0.5,u'$\\beta$')
from IPython.core.display import HTML

def css_styling():
    styles = open("../styles/custom.css", "r").read()
    return HTML(styles)

css_styling()
@teachorg/react-range
v1.4.4-teachorg
Published
Range input. Slides in all directions.
Downloads
15
Maintainers
Readme
react-range
FORKED from
See all the other examples and their source code!
Installation
yarn add react-range
Usage
import * as React from 'react';
import { Range } from 'react-range';

class SuperSimple extends React.Component {
  state = { values: [50] };
  render() {
    return (
      <Range
        step={0.1}
        min={0}
        max={100}
        values={this.state.values}
        onChange={values => this.setState({ values })}
        renderTrack={({ props, children }) => (
          <div
            {...props}
            style={{
              ...props.style,
              height: '6px',
              width: '100%',
              backgroundColor: '#ccc'
            }}
          >
            {children}
          </div>
        )}
        renderThumb={({ props }) => (
          <div
            {...props}
            style={{
              ...props.style,
              height: '42px',
              width: '42px',
              backgroundColor: '#999'
            }}
          />
        )}
      />
    );
  }
}
Features
- Range input supporting vertical and horizontal sliding
- Unopinionated styling, great for CSS in JS too
- No wrapping divs or additional markup, bring your own!
- Accessible, made for keyboards and screen readers
- Touchable, works on mobile devices
- Can handle negative and decimal values
- Stateless and controlled single component
- Typescript and Flow type definitions
- No dependencies, less than 4kB (gzipped)
- Coverage by e2e puppeteer tests
- RTL support
Keyboard support
taband
shift+tabto focus thumbs
arrow upor
arrow rightor
kto increase the thumb value by one step
arrow downor
arrow leftor
jto decrease the thumb value by one step
page upto increase the thumb value by ten steps
page downto decrease the thumb value by ten steps
<Range /> props
renderTrack
renderTrack: (params: {
  props: {
    style: React.CSSProperties;
    ref: React.RefObject<any>;
    onMouseDown: (e: React.MouseEvent) => void;
    onTouchStart: (e: React.TouchEvent) => void;
  };
  children: React.ReactNode;
  isDragged: boolean;
  disabled: boolean;
}) => React.ReactNode;
Use the renderTrack prop to define your track (root) element. Your function gets four parameters and should return a React component:
props- this needs to be spread over the root track element, it connects mouse and touch events, adds a ref and some necessary styling
children- the rendered thumbs, thumb structure should be specified in a different prop -
renderThumb
isDragged-
trueif any thumb is being dragged
disabled-
trueif
<Range disabled={true} />is set
The track can be a single narrow
div as in the Super simple example; however, it might be better to use at least two nested
divs where the outer
div is much thicker and has a transparent background and the inner
div is narrow, has visible background and is centered.
props should then be spread over the outer, bigger
div. Why do this? It's nice to keep the
onMouseDown and
onTouchStart targets bigger, since the thumb can also be moved by clicking on the track (in a single-thumb scenario).
renderThumb
renderThumb: (params: {
  props: {
    key: number;
    style: React.CSSProperties;
    tabIndex?: number;
    'aria-valuemax': number;
    'aria-valuemin': number;
    'aria-valuenow': number;
    draggable: boolean;
    role: string;
    onKeyDown: (e: React.KeyboardEvent) => void;
    onKeyUp: (e: React.KeyboardEvent) => void;
  };
  value: number;
  index: number;
  isDragged: boolean;
}) => React.ReactNode;
Use the renderThumb prop to define your thumb. Your function gets four parameters and should return a React component:
props- it has multiple props that you need to spread over your thumb element
value- a number, relative value based on
min,
max,
stepand the thumb's position
index- the thumb index (order)
isDragged-
trueif the thumb is dragged, great for styling purposes
values
values: number[];
An array of numbers. It controls the position of thumbs on the track.
values.length equals to the number of rendered thumbs.
onChange
onChange: (values: number[]) => void;
Called when a thumb is moved, provides new
values.
onFinalChange
onFinalChange: (values: number[]) => void;
Called when a change is finished (mouse/touch up, or keyup), provides current
values. Use this event when you have to make for example ajax request with new values.
min (optional)
min: number;
The range start. Can be decimal or negative. Default is
0.
max (optional)
max: number;
The range end. Can be decimal or negative. Default is
100.
step (optional)
step: number;
The minimal distance between two
values. Can be decimal. Default is
1.
allowOverlap (optional)
allowOverlap: boolean;
When there are multiple thumbs on a single track, should they be allowed to overlap? Default is
false.
direction (optional)
direction: Direction;

enum Direction {
  Right = 'to right',
  Left = 'to left',
  Down = 'to bottom',
  Up = 'to top'
}
It sets the orientation (vertical vs horizontal) and the direction in which the value increases. You can get this enum by:
import { Direction } from 'react-range';
Default value is
Direction.Right.
disabled (optional)
disabled: boolean;
If
true, it ignores all touch and mouse events and makes the component not focusable. Default is
false.
rtl (optional)
rtl: boolean;
If
true, the slider will be optimized for RTL layouts. Default is
false.
getTrackBackground
There is an additional helper function being exported from
react-range. Your track is most likely a
div with some background. What if you want to achieve a nice "progress bar" effect where the part before the thumb has different color than the part after? What if you want to have the same thing even with multiple thumbs (aka differently colored segments)? You don't need to glue together multiple divs in order to do that! You can use a single
div and set
background: linear-gradient(...).
getTrackBackground function builds this verbose
linear-gradient(...) for you!
getTrackBackground: (params: {
  min: number;
  max: number;
  values: number[];
  colors: string[];
  direction?: Direction;
  rtl?: boolean;
}) => string;
min,
max,
values and
direction should be same as for the
<Range /> component.
colors is a list of colors. This needs to be true:
values.length + 1 === colors.length;
That's because one thumb (one value) splits the track into two segments, so you need two colors.
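For illustration, the gradient-building idea can be sketched in a few lines. The snippet below is executable Python pseudocode, not the library's actual TypeScript helper, and it only covers the default left-to-right direction (no direction or rtl handling):

```python
def track_background(min_v, max_v, values, colors):
    # One color segment per region between thumbs: n values -> n + 1 colors.
    assert len(colors) == len(values) + 1
    stops, prev = [], 0.0
    for value, color in zip(sorted(values), colors):
        pct = (value - min_v) / (max_v - min_v) * 100
        stops.append(f"{color} {prev:g}%, {color} {pct:g}%")
        prev = pct
    stops.append(f"{colors[-1]} {prev:g}%, {colors[-1]} 100%")
    return "linear-gradient(to right, " + ", ".join(stops) + ")"

print(track_background(0, 100, [50], ["#548BF4", "#ccc"]))
# linear-gradient(to right, #548BF4 0%, #548BF4 50%, #ccc 50%, #ccc 100%)
```

Each thumb position becomes a hard color stop, which is why a single div with this background looks like differently colored segments.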
Motivation
There is a native input solution:
<input type="range" />
However, it has some serious shortcomings:
- vertical-oriented slider is not supported in all browsers
- supports only a single direction
- very limited styling options
- no support for multiple thumbs
There are also many
React-based solutions, but most of them are too bloated, don't support styling through CSS in JS, or have lacking performance.
react-range has two main goals:
- Small footprint - less than 4kB gzipped, single component.
- Bring your own styles and HTML markup -
react-rangeis a more low-level approach than other libraries. It doesn't come with any styling (except some positioning) or markup. It's up to the user to specify both! Think about
react-rangeas a foundation for other styled input ranges.
End to end testing
This library is tightly coupled to many DOM APIs. It would be very hard to ensure 100% test coverage just with unit tests that would not involve a lot of mocking. Or we could re-architect the library to better abstract all DOM interfaces but that would mean more code and bigger footprint.
Instead of that,
react-range adds thorough end to end tests powered by puppeteer.

Dev mode (storybook running in the background, slow):

yarn storybook #start the storybook server
yarn test:e2e:dev #run the e2e tests
CI mode (storybook started on the background, quick, headless)
yarn test:e2e
Browser support
- Chrome (latest, mac, windows, iOS, Android)
- Firefox (latest, mac, windows)
- Safari (latest, mac, iOS)
- Edge (latest, windows)
- MSIE 11 (windows)
Contributing
This is how you can spin up the dev environment:
git clone
cd react-range
yarn
yarn storybook
Shoutouts 🙏
Big big shoutout to Tom MacWright for donating the
react-range npm handle! ❤️
Big thanks to BrowserStack for letting the maintainers use their service to debug browser issues.
Author
Vojtech Miksu 2019, miksu.cz, @vmiksu | https://www.pkgstats.com/pkg:@teachorg/react-range | CC-MAIN-2022-40 | refinedweb | 1,200 | 56.15 |
.1: Object-Oriented Programming
About This Page
Questions Answered: How can I represent my program’s “world” to the computer? How can a larger program be built up from components — objects — that work together? How do I command an object in Scala?
Topics: Programming as conceptual modeling. Object-oriented programming: objects and methods, communication between objects. Singleton objects in Scala: calling a method, referring to an object.
What Will I Do? First, there’s some very important reading to do. Then you’ll practice commanding objects in code.
Rough Estimate of Workload: Two hours or so, if you go through each example carefully, as you should. The chapter may be challenging: the fundamental concepts of object-oriented programming may seem abstract or alien at first.
Points Available: A25.
Related Projects: IntroOOP (new).
Introduction
So far, we’ve covered an assortment of programming constructs: expressions, references, functions, data types, and so on.
Let’s put them all to the side for a moment.
In this chapter, we’ll adopt a new perspective as we discuss our chosen approach to creating larger programs: object-oriented programming.
In a sense, all the preceding chapters have been an overture for the work that begins in this chapter and continues throughout O1. At first, it may not be clear how what you learned earlier connects with this chapter, but by the time you finish the chapter, you should already have a fair inkling. As it happens, the overture was composed out of those elements precisely because they’ll soon come in handy as you learn to program in an object-oriented way.
Conceptual Modeling
A failure to communicate
Programs manipulate diverse forms of data. Just adding numbers, concatenating strings, or drawing circles won’t get us far. We want our program to “store a file”, “enroll a student in a course”, “react to a button click”, “withdraw money from an account”, or “pick the enemy’s next move”.
We face a problem: human thought and language are conceptual, but the computer doesn’t understand real-world concepts. What is a “course”, a “student”, a “hotel”, or a “bank account”?
A programming language cannot unambiguously define all the countless concepts that we might want to use in programs. Nevertheless, the computer needs unambiguous instructions if it is to execute programs that operate on the “world”, or domain, of the program.
Bridging the communication gap
Let’s take our human needs as our point of departure. Let’s create programs that a machine can process but that reflect people’s ways of making sense of the world. Let’s specify the concepts and terms that we need for the computer!
As programmers, our task will be to model the program’s domain: we’ll create a conceptual model of the relevant information and the behavior that is associated with each concept. Since the programming language itself doesn’t define all the concepts we want, we need the language to help us express definitions of our own.
Object-Orientation, A Programming Paradigm
Over the years, people have come up with a number of different ways to program, so-called programming paradigms (ohjelmointiparadigma). Different programming paradigms have their own recommendations about how to think about programs, how to represent a program’s domain, and how to structure program code.
In O1, we’ll focus on a particular paradigm: object-oriented programming or OOP (olio-ohjelmointi). OOP is a popular and interesting way to program. The Scala language is well suited to OOP.
Paradigms in O1
It’s possible to draw from several paradigms at once. The Scala language, for instance, has been deliberately designed to support multiple approaches to programming and to combine approaches. In O1, we’ll combine OOP especially with what is known as imperative programming (imperatiivinen ohjelmointi) and also with functional programming (funktionaalinen ohjelmointi). No need to worry about that now, though; we’ll discuss paradigms in some more detail in Chapter 10.2.
OOP has a number of attractive characteristics. In O1, we’ll embrace objects rather unquestioningly. Even so, you may wish to make a mental note of the fact that OOP is not a silver bullet that always works for any and all programming needs. Chapter 10.2 and O1’s follow-on courses will introduce you to alternative approaches.
Object-oriented programming emphasizes conceptual modeling. Each individual course, student, back account, GUI window, or button can be represented as an “object”.
Objects
object: something mental or physical toward which thought, feeling, or action is directed
—Merriam-Webster Online
Raindrops on roses and whiskers on kittens,
bright copper kettles and warm woolen mittens,
brown paper packages tied up with strings;
these are a few of my favorite things!
—from The Sound of Music by Rodgers and Hammerstein
An object (olio) is an abstraction of an entity or “thing” in an object-oriented program. Typically, an object will have a number of behaviors, known as methods (metodi), that define how you can use the object. For instance, you might represent a car as an object that has methods such as “drive”, “add passenger”, “refuel”, “determine remaining fuel”, and so on.
Many objects also have attributes that describe the object’s permanent or variable characteristics. For instance, a car object might have attributes such as manufacturer, current location, the amount of fuel in the car’s tank, or the passengers currently inside the car.
It’s up to the programmer to choose which attributes and methods they want to associate with each object. That decision depends on which aspects of the domain (the “world”) are relevant to the problem at hand. For instance, a car’s manufacturer might be relevant in some programs but not in others. As a computer runs an object-oriented program, it stores objects in its memory as dictated by the programmer.
Pretty much anything can be represented as an object. Here are some examples in visual form:
As the image shows, an object can be associated with data (e.g., the ID number of a student object, the text of a button object) as well as actions (e.g., enrolling a student, removing a file). Notice, too, that objects may resemble each other — in other words, they may have the same type — as is the case for the two animal objects above.
Modeling a domain as objects
We’ll usually use a combination of objects to model a program’s domain. Here’s one simplified example as a diagram:
Objects associated with an imaginary course-enrollment system (cf. the Oodi system used at Aalto). Notice that the objects refer to each other and combine to form a whole.
Each object has specific responsibilities in making a program work. For instance, a course object might be charged with keeping track of which students have enrolled: it records new enrollments while adhering to restrictions on how many students can sign up. By combining the functionality of different objects, we can define the whole program’s behavior.
We can also use objects to represent a user interface. Consider the following GUI window, for example:
In a Scala program, that window can be represented as a combination of objects as shown in this diagram:
An Anthropomorphic Perspective on Objects
There’s a common metaphor that may help you wrap your brain around OOP.
You can think of objects as human-like actors. Each object is, if not sentient, at least capable in a narrow speciality.
An object is self-aware in the sense that it “knows” its own attributes. For instance, a car object knows the amount of gas left, and a course object knows the students who have enrolled.
An object is capable of receiving messages (“commands”, “requests”) that match its methods. You can command a course object to enroll a student, a car object to refuel, or a file object to delete the corresponding data from the hard drive. An object can’t act on a message unless it has a method available for that sort of message.
An object has a specific way of behaving, that is, a specific way of reacting to the messages it receives. The programmer defines the objects’ behavior using the tools of the programming language: for example, we can define objects that compute with numbers, record information in the computer’s memory, send messages to other objects, create new objects, and so forth.
An object is unfailingly obedient and complies with the programmer’s instructions to the letter. Object-oriented programming, too, requires diligence, precision, and unambiguosity of language.
Communication between Objects
A single object may not be able to do much, but multiple objects in organized co-operation can accomplish a great deal. Let’s take a look at two examples.
Example: GoodStuff
How does Chapter 1.2’s GoodStuff application operate, as a whole, when you use it through its GUI? For instance: what happens when the user clicks the Add experience button? At that point, the program is supposed to record a new experience and, if appropriate, move the happy face that marks the user’s favorite experience.
The objects of the GoodStuff program make use of each other’s methods: they “command each other”. The program’s execution happens as the objects message each other and react to those messages. The presentation below should give you an idea of how this works. Even though this depiction is visual, it’s consistent with the actual technical implementation within the GoodStuff project.
Above, object communication was shown as speech bubbles, but it’s possible to express the same messages as Scala code; see below. Even though these commands aren’t familiar yet, perhaps you’ll hazard a guess as to which bubble each Scala command matches.
new Experience(name, description, price, rating)
this.experiences += newExperience
newExperience.chooseBetter(this.fave)
if (this.isBetterThan(another)) this else another
this.favorite = newExperience
category.addExperience(newExperience)
You’ll learn each command’s precise meaning later.
Example: course enrollment
(The second example of communicating objects, below, is essentially similar to the previous one. You would do well to study this example, too. However, you may skip it if you’re in a dreadful hurry or if everything seems crystal clear. It’s forbidden to skip the example just because you’re lazy.)
Let’s consider an imaginary application program where a student can enroll in courses by clicking a GUI button associated with the desired course. At this point, the program should confirm whether the student can be successfully enrolled. Enrollment is successful if there’s enough space in the lecture hall and if the same student hasn’t already enrolled. Upon enrollment, the program should record the student in the course’s list of enrollees and update the student’s list of courses they’re personally enrolled in.
Here’s a sketch of an object-oriented solution:
Reflections on Object-Oriented Programs
Program state
The states of individual objects form the state of the object-oriented program. In GoodStuff, the program’s state encompasses the state of a category object (its name, favorite, and list of recorded experiences) as well as the data associated with each experience object (descriptions, ratings, and prices).
As an object acts on the messages that it receives, its state sometimes changes, as does the state of the whole program. Examples: a new experience is added in a category, a new student is enrolled in a course, a user’s personal information is edited, etc.
Program execution
The objects of an object-oriented program form a conceptual model that structures the program so that it makes sense to a human. Each object has a particular role in the interplay that unfolds within this structure as the program is run. Ultimately, though, a program run is just a sequence of commands being executed one after the other by the computer. The programmer defines these commands and attaches them to objects as methods. Some of the methods cause objects to issue further commands to each other: it’s as if one object passes the turn to act to another object and waits for the other object to finish; only a single object is active at a time.
The methods on objects implement small algorithms, which combine to produce the overall algorithm that accomplishes what the program is meant to do.
Well, to be more exact...
What was just said is a simplification. It’s possible to create programs where multiple objects work (genuinely or at least virtually) simultaneously. However, we won’t be covering concurrency or parallel computing in O1.
A Closer Look at Messages
If this chapter still seems to be disconnected from the earlier ones, that’s about to change.
Method calls
Sending a message to an object activates one of the object’s methods. Sending such a message is known as calling a method (or invoking a method; metodikutsu). Here, we call a method named “drive” on a car object:
Some method calls simply request the object to report something about its state, such as the amount of fuel in the tank. Other calls are more elaborate:
Method parameters
Methods can receive parameters that convey additional information about what the object is supposed to do. In this example, the amount of fuel is provided as a method parameter (which is highlighted in yellow):
Different kinds of values can be passed as parameters: numbers, references to other objects, and so forth.
A method may receive one or more parameters. Or zero, if the message itself says it all, like here:
Responses from objects, a.k.a. return values
Often, an object will answer a method call in some way, sending a message to the caller as a response. We say that the method returns a value. One use for a return value is to acknowledge an operation’s success or failure:
A method’s return value may also report something about the object’s state:
Questions about objects and methods
Methods are functions attached to objects
What was said above about calling methods, passing parameters to methods, and receiving return values is very similar to what you learned about functions in Week 1. This is of course not a coincidence. Methods are functions that are associated with objects. They have access to the object’s data and they take care of things that fall within the object’s purview. They define what the object can do.
What we’ve called communication between objects is essentially one object’s functions calling other objects’ functions.
We’re now ready to place the fundamental concepts of object-oriented programming in our concept map:
Objects and Scala
The Scala language gives us the means to define singleton objects (yksittäisolio): we can write a piece of code that specifies the characteristics of one individual object. Once we’ve so defined an object, we can issue Scala commands to it.
In the rest of this chapter, you’ll continue to learn object-oriented programming by commanding some singleton objects that have been already defined for you. In the next chapter, you’ll get to define objects and methods of your own. (This is much as in Chapters 1.6 to 1.8 where you first used given functions before implementing your own.)
In order to command an object, we need to tell the computer which object we wish to address and which of its methods we wish to call. That’s precisely what we’ll do in the next example as we experiment with a particular object:
The parrot: an introduction
Let’s use a virtual “parrot”. This parrot object has a particular repertoire of phrases
(strings) that it has learned to “say” and that it recites whenever it “hears” a
familiar-sounding word. We can command the parrot to speak by calling a method named
respond and passing in a phrase that the parrot hears.
Calling the parrot’s
respond method. The method’s name is in red,
the parameter in yellow, and the return value in green.
Before we begin using the object, let’s import the contents of
o1.objects; we’ll need
them for the rest of this chapter. Our virtual parrot is one of the things defined in
that package, which comes within the IntroOOP project.
import o1.objects._
import o1.objects._
Method calls in Scala
Let’s call
respond and pass in a string parameter:
parrot.respond("Would you like a cracker?")
res0: String = Polly wants a cracker!
The string is passed as a parameter to respond; the parrot “hears” it and responds to it.
Here are four more examples of calling respond on different parameter values:

parrot.respond("Would you like some crackers?")
res1: String = Polly wants a cracker!
parrot.respond("So you're called Polly?")
res2: String = Polly wants a cracker!
parrot.respond("How are you?")
res3: String = How!
parrot.respond("Bye then, I guess.")
res4: String = Bye!
What’s important here is for you to see how to command an object, which just happens to be a particular sort of parrot object. How the parrot has been programmed to pick its responses is not significant in itself, but it may be easier for you to follow the rest of the example if you know this much: when the parrot “hears” a word from a phrase in its repertoire, it recites that phrase; otherwise, it simply repeats the first word it heard.
Go ahead and experiment with the parrot object. Choose IntroOOP as you launch the REPL and remember to run the import first.
A study in parroting
Our example parrot’s repertoire contains two phrases. One, as you’ve seen, is “Polly wants a cracker”. Call the parrot’s respond method to find out what the other phrase is. Hint: mention “rum”.
Affecting an Object’s State
So far, nothing about our example suggests that an object is anything more than a bunch of functions that can be accessed via a particular name such as parrot. But there’s more to an object than that.
In the earlier examples, you saw that an object has data that defines the object’s state. That state may influence how the object reacts to method calls.
Some of an object’s data can be immutable (e.g., the manufacturer recorded for a car object), and indeed some objects’ state never changes. However, many objects have methods that affect state. For instance, a method might change a car object’s location or a bank-account object’s balance.
Our parrot, too, has a method that affects the parrot’s state by expanding its repertoire:
parrot.respond("Time flies like an arrow")
res5: String = Time!
parrot.learnPhrase("Fruit flies like a banana")
parrot.respond("Time flies like an arrow")
res6: String = Fruit flies like a banana!

To teach the parrot, we call learnPhrase and pass the new phrase as a parameter. This method returns only Unit, which means that no return value shows up in the REPL.

Calling respond again attests that learning took place: now the parrot recognizes something familiar and recites a phrase from its updated repertoire. Our object has a memory!
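The parrot is defined for us in o1.objects, but to make the idea concrete, here is a hypothetical sketch of how a parrot-like singleton object could be written in Scala. The matching rule, the word-length heuristic, and the single starting phrase are guesses that merely reproduce the responses shown above (the real parrot starts with two phrases); the actual implementation in o1.objects may work quite differently. You’ll learn to write objects like this yourself in Chapter 2.2.

```scala
// A guessed sketch, not the real o1.objects parrot: a singleton object with
// mutable state (its repertoire) and two methods, respond and learnPhrase.
object Parrot {
  // The repertoire is part of the object's state; learnPhrase mutates it.
  private val phrases = scala.collection.mutable.Buffer("Polly wants a cracker")

  // If the input contains a familiar-sounding word (a longer word from a known
  // phrase), recite that phrase; otherwise just echo the first word heard.
  def respond(heard: String): String = {
    val heardWords = heard.toLowerCase.split("\\W+").toSet
    val familiar = phrases.find { phrase =>
      val keyWords = phrase.toLowerCase.split("\\W+").filter(_.length >= 3)
      keyWords.exists(kw => heardWords.exists(_.contains(kw)))
    }
    familiar.getOrElse(heard.split("\\W+").head) + "!"
  }

  // Expands the repertoire. Returns Unit, so the REPL shows no result.
  def learnPhrase(newPhrase: String): Unit = {
    phrases += newPhrase
  }
}
```

With this sketch, Parrot.respond("How are you?") yields "How!", and after Parrot.learnPhrase("Fruit flies like a banana") the parrot recognizes "Time flies like an arrow", just as in the transcripts above.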
Another Example: A Virtual Radio
The object of the next exercise is a radio. Once again, we recommend that you work along in the REPL, trying the commands shown below and your own variations of them.
Our radio object represents the tuning interface of an FM radio (for an imaginary, simplified device). It has four “presets” for quickly accessing particular (virtual) radio channels. We can switch the radio to a preset station by calling the select method and passing in a parameter between 1 and 4. Let’s pick preset number two, for instance:

radio.select(2)
res7: String = 94,0 MHz: Radio Suomi
As it happens, the radio object has been pre-programmed so that preset number 2 is 94.0 megahertz. The select method returns a string that informs us of the frequency and the corresponding channel. Other presets produce other frequencies:

radio.select(4)
res8: String = 98,5 MHz: Radio Helsinki
The method tune “turns the hertz knob”. That is, it adjusts the frequency from the current value by a given amount:

radio.tune(2)
res9: String = 98,7 MHz: just static
An adjustment’s size is measured in “notches”. For this radio, one notch equals 100 kHz, so the command above increases the frequency by 2 * 100 kHz from what it was. (There’s no channel at that frequency.)
Notice that the radio object clearly is capable of keeping track of the frequency that it’s tuned to: it computes a new state for itself from its old state and the parameter value it receives. This object, too, has a memory.
Requesting the values of attributes
Our radio object stores information that we can request from it simply by writing the desired attribute’s name after the dot. Below, we ask for the currently selected frequency and the radio’s notch size, both of which are integers:
radio.frequencyKHz
res10: Int = 98700
radio.notchKHz
res11: Int = 100
You don’t need parameters or parentheses when accessing these attributes.
Assigning to an attribute
The way our radio object has been programmed, we can assign values to some of its attributes.
Instead of calling the methods tune or select, we can set the frequency directly:

radio.frequencyKHz = 92200
radio.frequencyKHz: Int = 92200
This is, in effect, a different-looking way to send a message to the object: “Set your frequencyKHz attribute to the value 92200.”
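Again for illustration only, here is a hypothetical sketch of a radio-like singleton. Only presets 2 and 4 (Radio Suomi at 94.0 MHz and Radio Helsinki at 98.5 MHz) come from the transcripts above; the frequencies for presets 1 and 3 and the starting frequency are invented. Note also that this sketch prints a decimal point where the chapter’s Finnish-locale output shows a comma.

```scala
// A guessed sketch of the radio's tuning interface. frequencyKHz is a public
// var, which is why the assignment "radio.frequencyKHz = 92200" works.
object Radio {
  val notchKHz = 100                 // one notch of the "hertz knob"
  var frequencyKHz = 87500           // current frequency; part of the state

  // Presets 1 and 3 are invented; 2 and 4 match the chapter's transcripts.
  private val presets = Map(1 -> 87900, 2 -> 94000, 3 -> 96200, 4 -> 98500)
  private val channels = Map(94000 -> "Radio Suomi", 98500 -> "Radio Helsinki")

  // E.g. 98700 becomes "98.7 MHz: just static".
  private def describe =
    s"${frequencyKHz / 1000}.${frequencyKHz % 1000 / 100} MHz: " +
      channels.getOrElse(frequencyKHz, "just static")

  def select(preset: Int): String = {
    frequencyKHz = presets(preset)      // new state from the parameter value
    describe
  }

  def tune(notches: Int): String = {
    frequencyKHz += notches * notchKHz  // new state computed from old state
    describe
  }
}
```

The sketch shows both ways the object’s state can change: tune computes a new frequency from the old one, while direct assignment to frequencyKHz bypasses the methods entirely.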
[Animation: Using objects]
On Objects and Abstractions
At the end of Chapter 1.6, we identified functions and variables as abstractions that make your work as a programmer easier.
Abstraction is also central to objects, in two different ways.
First, each object is an abstraction of some entity that it represents. The programmer has chosen to represent, in the object, certain aspects of the problem domain, while leaving out aspects that are less salient.
Second, each object has both an internal implementation and an external “façade” or interface through which programmers communicate with the object. The interface is an abstraction of the actual object: it includes the names of methods, the types of parameters and return values, and any other information that you need to use the object. The algorithms that implement each method aren’t part of the interface, nor are the details of how an object keeps track of its state. We’ll return to this important notion in Chapter 3.1.
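To make the interface/implementation distinction concrete, here is a small made-up example (not from o1.objects). Callers see only the method click() and the attribute count; the variable that stores the state is private, so the implementation could change without affecting code that uses the object.

```scala
// Interface: click() and count. Implementation detail: the private var.
object ClickCounter {
  private var clicks = 0          // internal state, hidden from callers

  def click(): Unit = {           // part of the public interface
    clicks += 1
  }

  def count: Int = clicks         // read-only view of the state
}
```

Code elsewhere can call ClickCounter.click() and read ClickCounter.count, but it cannot touch clicks directly; that name is not part of the object’s interface.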
In Conclusion: Objects as Static and Dynamic
This chapter is central to our programming efforts in future chapters. With that in mind, please take the time to view the short presentation below. It frames this chapter’s topic in a broader context and sets the scene for the chapters to come.
In this chapter, you’ve seen examples of objects defined in Scala, and should now have a fair idea of how you can command an object whose interface you know. So what next? Here’s the plan:
- In Chapter 2.2, you’ll learn to implement singleton objects. You’ll see how to define an object and the way the object responds to messages.
- Chapter 2.3 introduces classes. You can use a class to define the characteristics of numerous similar objects in one go. First, you’ll learn to use a given Scala class, and then...
- ... in Chapter 2.4 you get to practice implementing classes of your own.
- During Weeks 2, 3, and 4, we’ll continue to explore the GoodStuff application and you’ll learn to understand the classes that implement that program. We’ll discuss many other example programs as well, to be sure.
- During the rest of O1, you’ll continue to learn concepts and techniques that you can apply as you build programs from classes and objects.
Summary of Key Points
- Object-oriented programming (OOP) is one way of defining the entities and concepts that make up a program’s domain, such that they can be manipulated by a computer.
- An object is a representation of a single entity or “thing”. Myriads of different things can be represented as objects.
- An object commonly has attributes, which are part of its state, as well as actions, called methods, which determine what you can do with the object.
- As an object-oriented program runs, the objects are held in the computer’s memory. Together, they form the program’s state.
- You command an object by calling its methods. Methods are functions attached to objects; their purpose is to process information associated with the object and to take care of tasks that have been consigned to the object.
- You can view the execution of an object-oriented program as communication between objects:
- An object can be defined so that it commands other objects. Objects communicate by calling each other’s methods. An object can delegate part of its task to another object.
- An object-oriented program, too, is ultimately executed by a computer as a sequence of instructions. Objects structure the problem domain; communication between objects produces the sequence in which instructions get executed.
- In Scala, you call a method like you call any function; you just have to indicate the target object, as in someObject.methodName(parameters).
- Some objects have attributes that you can modify by assigning new values to them: someObject.attributeName = newValue.
- Links to the glossary: object-oriented programming, object, method; abstraction; domain; state; static, dynamic; singleton.
parrot is defined in package o1.objects and refers to a particular object.
In this JDBC tutorial we are going to learn how to add a new column to an existing database table. Sometimes we create a table and forget to include an important column. Later, while retrieving data from that table, we discover that it doesn't contain the column we are looking for. There is no need to panic: we can fix this easily, as the simple example below shows. A brief description is given below:
Description of program:
Firstly, we need to create a connection to the MySQL database with the help of the JDBC driver. Remember, in this program we add a column to an existing database table. After establishing the connection, the program reads the table name, column name, and data type, and then adds the new column to the table. If the column is added successfully, the program prints "Query OK, n rows affected"; otherwise it displays the message "Table or column or data type is not found!".
Description of code:
ALTER TABLE table_name ADD col_name data_type;
The statement above adds a new column to a database table and takes the following attributes:
table_name: The name of the table to which you want to add the new column.
col_name: The name of the column that you want to add to the table.
data_type: The data type of the new column.
Here is the code of program:
import java.io.*;
import java.sql.*;

public class AddColumn {
  public static void main(String[] args) {
    try {
      // Load the MySQL JDBC driver and open a connection.
      // The URL, user name, and password below are placeholders:
      // adjust them for your own database.
      Class.forName("com.mysql.jdbc.Driver");
      Connection con = DriverManager.getConnection(
          "jdbc:mysql://localhost:3306/test", "root", "root");
      BufferedReader bf = new BufferedReader(new InputStreamReader(System.in));
      System.out.println("Enter table name:");
      String table = bf.readLine();
      System.out.println("Adding new column in:");
      String col = bf.readLine();
      System.out.println("Enter data type:");
      String type = bf.readLine();
      try {
        // Build and run the ALTER TABLE statement.
        Statement st = con.createStatement();
        int n = st.executeUpdate("ALTER TABLE " + table + " ADD " + col + " " + type);
        System.out.println("Query OK, " + n + " rows affected");
      } catch (SQLException s) {
        System.out.println("Table or column or data type is not found!");
      }
      con.close();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}
Database Table: Student Table
Output of program:
After adding a new column: Student Table