Contents

Chapter 1 Introduction/Objective
Chapter 2 System Analysis
  2.1 Identification of Need
  2.2 Preliminary Investigation
  2.3 Feasibility Study
  2.4 Project Planning
  2.5 Project Scheduling
  2.6 Software Requirements Specification
  2.7 Software Engineering Paradigm Applied
  2.8 Use Case Diagrams, ER Diagrams
Chapter 3 System Design
  3.1 Modularisation Details
  3.2 Data Integrity & Constraints
  3.3 Database Design
  3.4 User Interface Design
Chapter 4 Coding
  4.1 Complete Project Coding
  4.2 Comments & Description
  4.3 Standardisation of the Coding / Code Efficiency
  4.4 Error Handling
  4.5 Parameter Calling/Passing
  4.6 Validation Checks
Chapter 5 Testing
  5.1 Testing Techniques & Strategies
  5.2 Debugging & Code Improvement
Chapter 6 System Security Measures
  6.1 Database/Data Security
  6.2 Creation of User Profiles & Access Rights
Chapter 7 Cost Estimation of Project
Chapter 8 Reports (Layouts)
Chapter 9 Future Scope & Further Enhancement of the Project
Bibliography
Glossary

Chapter 1 Introduction

The purpose of this project is to develop an online Doctor Finder system that lets customers/patients search for a doctor and book an appointment online. Once a customer's credentials (user id and password) are verified, the system provides all its facilities, including viewing account information, performing transfers, changing the registered address, retrieving a forgotten password, performing transactions, and viewing appointments. The system also supports an online enrolment facility for new customers.

The administrator can perform various operations, such as entering all hospital details for customers and maintaining the facility that lets users search easily. When a customer wants to take an appointment, they must register first; the administrator then verifies their status after checking all details. The administrator also has the privilege to close a customer's account at the customer's request. A customer can access his/her account from anywhere simply by entering the correct user id and password.

Chapter 2 System Analysis

2.1 Identification of Need

Need to locate a provider quickly? Our online Doctor Finder (provider search) gives you flexibility in a simple format. Be sure to check your criteria on the provider-search web page most appropriate for your plan. This online Doctor Finder helps you find a good match for your medical needs, and it provides basic professional information on virtually every licensed physician. While it is our goal to provide the most up-to-date information, the provider network is constantly developing, so always verify that the provider you choose is participating in the network before you receive services.

Schedule appointments 24 hours a day, 7 days a week: whether it is 2:00 AM and your office is closed, or 2:00 PM and your phones are busy, you can be there for your patients and fill your schedule too.

Turn your website traffic into real appointments: potential patients are visiting your site right now, and leaving. In a matter of minutes, Doctor Finder can allow these visitors to book appointments with you instantly. You receive the appointment details, and patients provide their reason for the visit, so your practice always runs smoothly.
We send several appointment reminders to make sure your patients show up on time. Patients can even book appointments directly from your personal website after becoming members. They can send queries to the doctor and feedback to the admin, and a visitor can also contact us by filling in a simple form.

Search hint: the optimal way to search for a physician by name is to search by last name only, plus the state. You may also want to perform a "sounds-like" search if you are unsure of the exact spelling of a name or city, or if your search did not return the desired results; this option is available beneath the name and address fields on the "Search by Physician Name" page. The optimal way to search for a physician by specialty is to select a specialty and a state. If your result set is larger than the predetermined limit, you will be asked to narrow the search by adding a city and/or ZIP code.

Occasionally a physician is difficult to locate because:
- The physician has moved to a different state and the AMA has not yet received the new address.
- A small number of physicians have requested a "no contact" designation on their AMA records (such records are managed like an unlisted phone number and are not released).
- Physicians without active licenses do not appear in Doctor Finder.
- The physician's name may contain a space, like "Mc Donald" (the space is required in the search).
- Doctor Finder uses the primary medical specialty designated by the physician (your physician may practise more than one specialty).

2.2 Preliminary Investigation

In this process the development team visits the customer and studies their system, investigating the need for possible software automation. By the end of the preliminary investigation, the team produces a document holding the specific recommendations for the candidate system, together with personnel assignments, costs, the project schedule, and target dates.

The main tasks of the preliminary investigation phase are:
- Investigate the present system and identify the functions to be performed.
- Identify the objectives of the new system. In general, an information system benefits a business by increasing efficiency, improving effectiveness, or providing a competitive advantage.
- Identify problems and suggest a few solutions.
- Identify constraints, i.e. the limitations placed on the project, usually relating to time, money, and resources.
- Evaluate feasibility: whether the proposed system promises sufficient benefit to justify the additional resources needed to establish the user requirements in greater detail.

To conclude the preliminary examination, the systems analyst writes a brief report to management listing:
- the problem that triggered the initial investigation;
- the time taken by the investigation and the persons consulted;
- the true nature and scope of the problem;
- the recommended solutions for the problem and a cost estimate for each.

The analyst should then arrange a meeting with management to discuss the report and other matters if need be. The deliverable of the preliminary investigation phase is either a willingness to proceed further or the decision to abandon the project.

2.3 Feasibility Study

A feasibility study tests a system proposal for workability, impact on the application area, ability to meet user needs, and effective use of resources. It focuses on four major questions:
1. What are the user's demonstrable needs, and how does the candidate system meet them?
2. What resources are available for the candidate system? Is the problem worth solving?
3. What are the likely impacts of the candidate system on the application area?
4. How well does the candidate system fit within the application area?

These questions revolve around investigating and evaluating the problem, identifying and describing candidate systems, specifying the performance and cost of each, and finally selecting the best system. The objective of the feasibility study is not to solve the problem but to acquire a sense of its scope. During the analysis, the problem definition is crystallised and the aspects of the problem to be included in the system are determined. The feasibility analysis serves as a decision phase for two questions: is there a new and better way to do the job that will benefit the user, and what are the costs and savings of the alternatives? Three key considerations are involved: economic, technical, and behavioural.

2.3.1 Economic Feasibility

The benefits and savings expected from the candidate system are mainly in terms of time: when a user can handle a project directly through the provided interfaces, without the burden of coding for every kind of modification, a great deal of time and human effort is saved. The cost borne by the resources needed for development (manpower and computing systems) had to be estimated, and a full cost estimation was done prior to project kick-off. Procurement costs, consultation costs, equipment purchases, installation costs, and management costs are involved in developing the proposed system. There are start-up costs, but no new costs for operating-system software, communications-equipment installation, recruitment of new personnel, or disruption to the rest of the system. There is further no need to purchase special application software, perform software modifications, or fund training and data collection; only a meagre documentation-preparation cost is involved. Lastly, some system maintenance, depreciation, or rental cost is involved with the new system.

2.3.2 Technical Feasibility

Technical feasibility centres on the existing computer system (hardware, software, etc.) and the extent to which it can support the proposed addition. This phase involves financial considerations to accommodate technical enhancements; if the budget is a serious constraint, the project is judged not feasible. Technical feasibility is one of the most difficult areas to assess at this stage of systems engineering, because with the right assumptions anything might seem possible. The considerations normally associated with technical feasibility include:

1) Development risk: can the system element be designed so that the necessary function and performance are achieved within the constraints uncovered during analysis of the present system? The new system proposes significant changes to the present system to make it more efficient, and it meets all the constraint and performance requirements identified for the system to become successful.

2) Resource availability: are skilled staff available for the development of the new proposed system, and are any other necessary resources available to build it? The participants working on the proposal are seniors who have sufficient knowledge and the learning skills required for developing the new system.
No other special resources are needed: the system can be developed using the computing and non-computing resources available within the present setup.

3) Technology: has the relevant technology progressed to a state that will support the system? Related work in this field is already available in the commercial world and has been used successfully in many areas, so no special technology needs to be developed. The new system is fully capable of meeting the performance, reliability, maintainability, and predictability requirements.

Social and legal feasibility encompasses a broad range of concerns, including contracts, liability, and infringement. Since the system is being developed by students of the institute themselves, no such concerns arise.

The degree to which alternatives are considered is often limited by cost and time constraints; however, variations that could provide alternative solutions to the defined problem should be considered. Alternative systems providing all the functionality of the desired system are not available, so the present solution is itself the most complete solution to the defined problem. The system is estimated to be around 95% feasible to implement, and the candidate system is fully supported by the existing computer system (hardware, software, etc.).

2.3.3 Behavioural Feasibility

People are inherently resistant to change, and computers have been known to facilitate change, so an estimate should be made of how strong a reaction users are likely to have towards the development of the system. The introduction of the candidate system will not require special effort to educate, sell, and train users on new ways of operating it. As far as performance is concerned, the candidate system will help attain accuracy with the least response time and minimal programmer effort through its user-friendly interface.

2.4 Project Planning

Planning begins with process decomposition. The project schedule provides a road map for a software project manager: using the schedule as a guide, the project manager can track and control each step in the software engineering process.

2.4.1 Project Tracking

1. Requirements Specification: complete specification of the system, including the framing of policy. Weeks 1-2.
2. Database Creation: list of tables and the attributes of each. Weeks 2-4.
3. High-Level and Detailed Design: high-level design (E-R diagram, DFD, use case diagram, class diagram, etc.) and detailed design (pseudocode or an algorithm for each activity). Weeks 4-7.
4. Implementation of the Front End: the login screen, a screen giving the various options for each login, and screens for each of the options. Weeks 7-10.
5. Integrating the Front End with the Database: screens connected to the database and updating it as required. Weeks 10-11.
6. Integration Testing: the system is thoroughly tested by running all the test cases written for it. Weeks 11-12.
7. Final Review: issues found during the previous milestone are fixed and the system is ready for the final review. Weeks 12-14.
2.5 Project Scheduling

The schedule runs over roughly 16 weeks (shown in the original report as a bar chart):
- Requirement analysis: weeks 1-2
- Database creation: weeks 2-4
- Detailed design: weeks 4-7
- Implementation: weeks 7-10
- Integration: weeks 10-11
- Testing and final review: weeks 11-14

2.6 Software Requirement Specification

2.6.1 An Introduction to ASP.NET

To clear the concept, consider a shopping cart as an example. The user adds items to a shopping cart: items are selected from a page, say the items page, and the total collected items and price are shown on another page. The ASP.NET runtime encodes and incorporates the state of the server-side components in hidden fields; this way the server becomes aware of the overall application state and operates in a two-tiered, connected way.

ASP.NET component model: the ASP.NET component model provides the various building blocks of ASP.NET pages. Basically it is an object model which describes server-side counterparts of almost all HTML elements and tags.

Code that is directly managed by the CLR is called managed code. When managed code is compiled, the compiler converts the source code into CPU-independent intermediate language (IL) code, which a just-in-time (JIT) compiler later converts to native code.

Metadata and assemblies: metadata is the binary information describing the program, stored either in a portable executable (PE) file or in memory. An assembly is a logical unit consisting of the assembly manifest, type metadata, IL code, and a set of resources such as image files.

Other building blocks of the .NET Framework include:
- Windows Forms: contains the graphical representation of any window displayed in the application.
- ADO.NET: the technology used for working with data and databases. It provides access to data sources such as SQL Server, OLE DB, and XML, and allows connecting to data sources for retrieving, manipulating, and updating data (see the sketch at the end of this section).
- Windows Workflow Foundation (WF): helps in building workflow-based applications on Windows; it contains activities, the workflow runtime, the workflow designer, and a rules engine.
- Windows Presentation Foundation (WPF): provides a separation between the user interface and the business logic, and helps in developing visually rich interfaces using documents, media, two- and three-dimensional graphics, animations, and more.
- Windows Communication Foundation (WCF): the technology used for building and running connected systems.
- Windows CardSpace: provides safety when accessing resources and sharing personal information on the internet.
- LINQ: imparts data-querying capabilities to .NET languages using a syntax similar to the traditional query language SQL.

In the classic ASP architecture, ASP source code runs on the web server; the server dynamically generates the HTML and sends the HTML output to the client's web browser.
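As a concrete illustration of the ADO.NET pattern described above, here is a minimal, self-contained sketch of querying a SQL Server table. The table and column names (Doctor, DName) are hypothetical placeholders for illustration, not the project's verified schema:

```csharp
using System;
using System.Data.SqlClient;

class AdoNetSketch
{
    static void Main()
    {
        // Assumes a local SQL Server Express instance; the @ prefix keeps
        // the backslash in the server name from being treated as an escape.
        using (var con = new SqlConnection(
            @"server=.\sqlexpress; database=DoctorFinder; Integrated Security=true"))
        using (var cmd = new SqlCommand("SELECT DName FROM Doctor", con))
        {
            con.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                    Console.WriteLine(rdr["DName"]); // one row per doctor
            }
        } // connection and reader are disposed automatically
    }
}
```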
2.6.2 Why Use ASP.NET?

Microsoft ASP.NET is more than just the next generation of Active Server Pages (ASP). It provides an entirely new programming model for creating network applications that take advantage of the internet, offering:
- enhanced reliability
- easy deployment
- new application models
- developer productivity
- improved performance and scalability

2.6.3 The Advantages of ASP.NET

ASP.NET has a number of advantages over many of its alternatives; several are listed above.

2.6.4 An Introduction to RDBMS

A Relational Database Management System (RDBMS) is an information system that presents information as rows contained in a collection of tables, each table possessing a set of one or more columns. Nowadays, the relational database is at the core of the information systems of many organisations, public and private, large and small. Informix, Sybase, and SQL Server are RDBMSs with worldwide acceptance, and Oracle is another powerful RDBMS product that provides efficient and effective solutions for database management.

2.6.5 The Features of SQL Server

- Scalability and performance: tools and features to optimise performance, scale up individual servers, and scale out for very large databases.
- High availability: SQL Server 2008 AlwaysOn provides flexible design choices for selecting an appropriate high-availability and disaster-recovery solution for your application. AlwaysOn was developed for applications that require high uptime, protection against failures within a data centre (high availability), and adequate redundancy against data-centre failures.
- Virtualisation support: Microsoft provides technical support for SQL Server 2005 and later versions in the following hardware virtualisation environments: Windows Server 2008 and later versions with Hyper-V; Microsoft Hyper-V Server 2008 and later versions; and configurations validated through the Server Virtualization Validation Program (SVVP).
- Enterprise security: SQL Server delivers a highly secure database and provides everything needed to adhere to security and compliance policies out of the box, including up-to-date encryption technologies built on Microsoft's Trustworthy Computing initiatives.
- Management tools: developers get a familiar experience, and database administrators get a single comprehensive utility that combines easy-to-use graphical tools with rich scripting capabilities.
- Other components: development tools, programmability, spatial and location services, complex event processing (StreamInsight), Integration Services (including advanced adapters and advanced transforms), data warehousing, Analysis Services (including advanced analytic functions), data mining, Reporting, business-intelligence clients, and Master Data Services.

Hardware and Software Requirements (Table 1): minimum system requirements are listed below.
- Processor: Intel Core i3
- RAM: 256 MB or more
- Operating system: Windows Server 2008, Windows XP/2007
- Database: SQL Server 2005
- Hard disk space: 50 MB
- Web server: ASP server
- Browser: Internet Explorer 5.0 or higher, Google Chrome
- Software: Visual Basic .NET 2010

2.7 Software Engineering Paradigm Applied

Conceptual model: the first consideration in any project development is to define the project's life-cycle model. The software life cycle encompasses all the activities required to define, develop, test, deliver, operate, and maintain a software product. Different models emphasise different aspects of the life cycle, and no single model is appropriate for all types of software. It is important to define a life-cycle model for each product, because the model provides a basis for categorising and controlling the various activities required to develop and maintain the product; a life-cycle model enhances project manageability, resource allocation, cost control, and product quality. There are many life-cycle models, such as:
i. the Waterfall model
ii. the Prototyping model
iii. the Spiral model

The Waterfall model: the model used in the development of this project is the Waterfall model.
This choice was made for reasons such as:
- the model is more controlled and systematic;
- all the requirements are identified at the time of initiating the project.

2.8 Use Case Diagrams, ER Diagrams

The use case diagram, data flow diagram, and ER diagram appear as figures in the original report.

Chapter 3 System Design

3.1 Modularisation Details

Three categories of users can use the application:
1. Customer (login user)
2. Admin (super user)
3. Doctor (login doctor)

The whole application can be divided into the following modules:

1) Admin module
- Manage profiles: the admin manages the profiles of users as well as doctors who have registered themselves, after checking all their details. Managing a profile here means the admin activates (authorises) the current user/doctor to become a member of the website; only after becoming a member can a user or doctor make transactions.
- Change password: the admin can also change their own password.
- View all doctors and search for a doctor: the admin has full authority to view all doctor details, and can search for a doctor by entering basic details (e.g. city, state, specialty, name).
- View all users and search for a user: the admin has full authority to view all users registered on the website, and can search for any particular user in the database (e.g. by active status, city, state, name).
- Feedback from users and doctors: a user or doctor can send feedback to the admin, and the admin can reply to them. A contact page gives all details about the website, so that everyone, including visitors and guests, can contact us easily by call, message, or email.
- Add doctor details.
- Verify a user account before the user can log in (activate/deactivate the account).
- Verify a doctor account before the doctor can log in (with an option to block the doctor's details).
- Submit news, and update or delete news.

2) Doctor module
- Log in.
- Update the general profile.
- Build the profile: education, hospital, degree, profile photo, degree snapshot.
- View user queries and resolve them.
- Inbox (view all messages sent by users).
- Send feedback.

3) User module
- Update the profile.
- Change the password.
- Send feedback.
- Search for a doctor (e.g. by name, city, state, specialty, hospital); a search sketch is given below.
- Send a query to a doctor after searching for a disease specialist.
- View query results.

4) Visitor module
- View current news.
- Search for a doctor by name, city, state, specialty, or hospital.
- View the services and the number of registered users on the website, with the doctor list shown on the left side.
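The doctor search mentioned in the modules above boils down to a filtered database query. Below is a minimal, hypothetical sketch of how such a search could be written with ADO.NET; the table name (Doctor_Registration) and column names (DName, City, State, Specialist) are assumptions for illustration, not the project's verified schema. Parameters are used so that user input cannot inject SQL:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public class DoctorSearch
{
    // Returns doctors matching any combination of the optional filters.
    public DataTable Search(SqlConnection con, string name, string city,
                            string state, string specialist)
    {
        // Each filter applies only when a value was supplied; parameters
        // keep the query safe from SQL injection.
        const string sql = @"SELECT * FROM Doctor_Registration
                             WHERE (@Name IS NULL OR DName LIKE @Name + '%')
                               AND (@City IS NULL OR City = @City)
                               AND (@State IS NULL OR State = @State)
                               AND (@Spec IS NULL OR Specialist = @Spec)";
        using (var cmd = new SqlCommand(sql, con))
        {
            cmd.Parameters.AddWithValue("@Name", (object)name ?? DBNull.Value);
            cmd.Parameters.AddWithValue("@City", (object)city ?? DBNull.Value);
            cmd.Parameters.AddWithValue("@State", (object)state ?? DBNull.Value);
            cmd.Parameters.AddWithValue("@Spec", (object)specialist ?? DBNull.Value);
            var dt = new DataTable();
            new SqlDataAdapter(cmd).Fill(dt); // Fill opens/closes the connection
            return dt;
        }
    }
}
```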
3.3 Database Design

The database contains the following tables:
- Doctor registration table
- Doctor education table
- User registration table
- Login table (for all users and the admin)
- Appointment table
- Forum table
- Feedback table
- News table

Chapter 4 Coding

Main class for all connections:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public class Class1
{
    // The @ prefix keeps the backslash in the server name from being
    // treated as an escape sequence.
    SqlConnection con = new SqlConnection(
        @"server=.\sqlexpress; database=DoctorFinder; Integrated security=true");

    public void open_con() { con.Open(); }
    public void close_con() { con.Close(); }
    public SqlConnection get_con() { return con; }
}
```

Code for the Login page:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Net;
using System.Net.Mail;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class Login : System.Web.UI.Page
{
    static string an = "";      // security answer fetched for password recovery
    Class1 obj = new Class1();

    protected void lnkbtn_Click(object sender, EventArgs e)
    {
        MultiView1.ActiveViewIndex = 0;
    }

    protected void btnLogin_Click(object sender, EventArgs e)
    {
        try
        {
            string usr = txtEmailId.Text;
            string pwrd = txtPassword.Text;

            // sp_login returns password, user type, verify status, name,
            // and e-mail for the given e-mail id.
            SqlCommand cmd = new SqlCommand("sp_login", obj.get_con());
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("U_Email_Id", usr);

            SqlDataAdapter ad = new SqlDataAdapter(cmd);
            DataTable dt = new DataTable();
            ad.Fill(dt);

            if (dt.Rows.Count == 0)
            {
                lbl_error_message.Text = "Invalid UserName";
            }
            else if (dt.Rows[0][0].ToString().Trim() != pwrd)
            {
                lbl_error_message.Text = "Invalid Password";
            }
            else
            {
                string userType = dt.Rows[0][1].ToString().Trim();
                string verified = dt.Rows[0][2].ToString().Trim();

                // Only admin-verified accounts may log in.
                if (verified == "1")
                {
                    Session["usr"] = dt.Rows[0][3].ToString();
                    Session["email"] = dt.Rows[0][4].ToString();

                    if (userType == "Admin")
                        Response.Redirect("~/Admin/Home.aspx");
                    else if (userType == "Doctor")
                    {
                        Session["User"] = dt.Rows[0][5].ToString();
                        Response.Redirect("~/Doctor/Home.aspx");
                    }
                    else if (userType == "User")
                        Response.Redirect("~/User/Home.aspx");
                }
            }
        }
        catch (Exception ex)
        {
            lbl_error_message.Text = ex.Message;
        }
    }

    protected void txtforgotpass_TextChanged(object sender, EventArgs e)
    {
        Label3.Visible = txtforgotpass.Text != "";

        // Parameterised to avoid the SQL injection risk of concatenating
        // the e-mail id into the statement.
        SqlCommand cmd = new SqlCommand(
            "select SQuestion, U_Password, Answer from Login where U_Email_Id = @Email",
            obj.get_con());
        cmd.Parameters.AddWithValue("@Email", txtforgotpass.Text);
        SqlDataAdapter da = new SqlDataAdapter(cmd);
        DataTable dt = new DataTable();
        da.Fill(dt);
        TextBox1.Text = dt.Rows[0][0].ToString();   // security question
        an = dt.Rows[0][2].ToString();              // expected answer
    }

    protected void Button1_Click(object sender, EventArgs e)
    {
        if (txt_Answer.Text == an)
        {
            lbl_Message.Text = "";
            try
            {
                SqlCommand cmd1 = new SqlCommand(
                    "select U_Password from Login where U_Email_Id = @Email",
                    obj.get_con());
                cmd1.Parameters.AddWithValue("@Email", txtforgotpass.Text);
                SqlDataAdapter ad1 = new SqlDataAdapter(cmd1);
                DataTable dt1 = new DataTable();
                ad1.Fill(dt1);

                var fromAddress = new MailAddress("go2vks@gmail.com", "From Name");
                var toAddress = new MailAddress(txtforgotpass.Text, "To Name");
                const string fromPassword = "9934697942";   // hard-coded in the original
                string subject = "Retrieve Password";
                string body = "Your Current Password is:- " + dt1.Rows[0][0].ToString();

                var smtp = new SmtpClient
                {
                    Host = "smtp.gmail.com",
                    Port = 587,
                    EnableSsl = true,
                    DeliveryMethod = SmtpDeliveryMethod.Network,
                    Credentials = new NetworkCredential(fromAddress.Address, fromPassword)
                };
                using (var message = new MailMessage(fromAddress, toAddress)
                {
                    Subject = subject,
                    Body = body
                })
                {
                    smtp.Send(message);
                }
                lbl_Message.Text = "Mail sent to your e-mail id.";
            }
            catch (Exception ex)
            {
                lbl_Message.Text = "Could not send the e-mail - error: " + ex.Message;
            }
        }
        else
        {
            lbl_Message.Text = "Please give the correct answer...";
        }
    }
}
```

Code for the Registration page:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class Reg_Form : System.Web.UI.Page
{
    Class1 obj = new Class1();

    protected void Page_Load(object sender, EventArgs e)
    {
        try
        {
            if (!IsPostBack)
            {
                // Populate the country and security-question drop-downs once.
                SqlDataAdapter ad = new SqlDataAdapter("select * from Country", obj.get_con());
                DataTable dt = new DataTable();
                ad.Fill(dt);
                ddlCountry.DataSource = dt;
                ddlCountry.DataTextField = "CName";
                ddlCountry.DataValueField = "CId";
                ddlCountry.DataBind();
                ddlCountry.Items.Insert(0, "select");

                SqlDataAdapter da = new SqlDataAdapter("select * from Security_Question", obj.get_con());
                DataTable dt1 = new DataTable();
                da.Fill(dt1);
                ddlSQuestion.DataSource = dt1;
                ddlSQuestion.DataTextField = "SQuestion";
                ddlSQuestion.DataBind();
            }
        }
        catch (Exception ex)
        {
            lbl_Message.Text = ex.Message;
        }
    }

    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        try
        {
            string gender = "";
            if (RbtnMale.Checked) gender = "Male";
            else if (RbtnFemale.Checked) gender = "Female";

            // Save the uploaded photo. The @ prefix keeps the backslashes
            // literal, and the trailing separator is required when
            // concatenating the file name.
            string path = @"~\images\Users\";
            string img = path + FileUpload1.FileName;
            FileUpload1.PostedFile.SaveAs(Server.MapPath(img));

            SqlCommand cmd = new SqlCommand("sp_insert_user", obj.get_con());
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@UserName", txtname.Text);
            cmd.Parameters.AddWithValue("@FathersName", txtFname.Text);
            cmd.Parameters.AddWithValue("@DOB", txt_Dob.Text);
            cmd.Parameters.AddWithValue("@EmailId", txtEmail.Text);
            cmd.Parameters.AddWithValue("@Address", txtaddress.Text);
            cmd.Parameters.AddWithValue("@Country", ddlCountry.SelectedItem.ToString());
            cmd.Parameters.AddWithValue("@State", ddlstate.SelectedItem.ToString());
            cmd.Parameters.AddWithValue("@City", ddlCity.SelectedItem.ToString());
            cmd.Parameters.AddWithValue("@PinCode", txtPinCode.Text);
            cmd.Parameters.AddWithValue("@Gender", gender);
            cmd.Parameters.AddWithValue("@Photo", img);
            cmd.Parameters.AddWithValue("@Password", txtpass.Text);
            string user1 = "User";
            cmd.Parameters.AddWithValue("@UserType", user1);
            cmd.Parameters.AddWithValue("@SQuestion", ddlSQuestion.SelectedItem.ToString());
            cmd.Parameters.AddWithValue("@Answer", txtAns.Text);
            obj.open_con();
            cmd.ExecuteNonQuery();
            obj.close_con();
            lbl_Message.Text = "Submitted";

            // Insert the matching login record; the account stays
            // unverified (status 0) until the admin activates it.
            SqlCommand cmd1 = new SqlCommand("sp_insert_login", obj.get_con());
            cmd1.CommandType = CommandType.StoredProcedure;
            cmd1.Parameters.AddWithValue("@U_Name", txtname.Text);
            cmd1.Parameters.AddWithValue("@U_Password", txtpass.Text);
            cmd1.Parameters.AddWithValue("@SQuestion", ddlSQuestion.SelectedItem.ToString());
            cmd1.Parameters.AddWithValue("@Answer", txtAns.Text);
            cmd1.Parameters.AddWithValue("@U_Email_Id", txtEmail.Text);
            cmd1.Parameters.AddWithValue("@User_Type", user1);
            cmd1.Parameters.AddWithValue("@Verify_Status", "0");
            obj.open_con();
            cmd1.ExecuteNonQuery();
            obj.close_con();
        }
        catch (Exception ex)
        {
            lbl_Message.Text = ex.Message;
        }
    }

    protected void ddlCountry_SelectedIndexChanged(object sender, EventArgs e)
    {
        try
        {
            // Parameterised, and keyed on the selected value (CId) rather
            // than the display text.
            SqlCommand cmd = new SqlCommand(
                "select SId, SName from State where CId = @CId", obj.get_con());
            cmd.Parameters.AddWithValue("@CId", Convert.ToInt32(ddlCountry.SelectedValue));
            SqlDataAdapter ad = new SqlDataAdapter(cmd);
            DataTable dt = new DataTable();
            ad.Fill(dt);
            ddlstate.DataSource = dt;
            ddlstate.DataTextField = "SName";
            ddlstate.DataValueField = "SId";
            ddlstate.DataBind();
            ddlstate.Items.Insert(0, "select");
        }
        catch (Exception ex)
        {
            lbl_Message.Text = ex.Message;
        }
    }

    protected void ddlstate_SelectedIndexChanged(object sender, EventArgs e)
    {
        try
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT CityId, CityName FROM City WHERE SId = @SId", obj.get_con());
            cmd.Parameters.AddWithValue("@SId", Convert.ToInt32(ddlstate.SelectedValue));
            SqlDataAdapter ad = new SqlDataAdapter(cmd);
            DataTable dt = new DataTable();
            ad.Fill(dt);
            ddlCity.DataSource = dt;
            ddlCity.DataTextField = "CityName";
            ddlCity.DataValueField = "CityId";
            ddlCity.DataBind();
            ddlCity.Items.Insert(0, "select");
        }
        catch (Exception ex)
        {
            lbl_Message.Text = ex.Message;
        }
    }
}
```

Chapter 5 Testing

5.1 Testing Techniques & Strategies

Testing is vital to the success of any system and is done at different stages within the development phase. System testing makes the logical assumption that if all parts of the system are correct, the goals will be achieved successfully. Inadequate testing, or no testing at all, leads to errors that may surface long after delivery, when correction is extremely difficult. Another objective of testing is its utility as a user-oriented vehicle before implementation. The system was tested on both artificial and live data.

5.1.1 Test Strategy

The purpose of the project test strategy is to document the scope and methods that will be used to plan, execute, and manage the testing performed within the project. The purpose of the testing is to ensure that, based on the solutions designed, the system operates successfully.

5.1.2 Unit Testing

Unit testing focuses verification efforts on the smallest unit of software design: the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. Unit testing is white-box oriented, and the steps can be conducted in parallel for multiple components.

5.1.3 Integration Testing

Integration testing is a systematic technique for constructing the program structure while simultaneously conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build the program structure dictated by the design. Integration testing was conducted on the different modules: the client and server programs were tested to confirm that the correct data is passed, the retransmission module was tested to confirm that it reports proper times, and the protocol system was tested to confirm that it sends acknowledgements and, when one is not received, retransmits packets. The interfaces were tested thoroughly so that no unpredictable event occurs when any button is pressed.

5.1.4 Validation Testing

At the culmination of integration testing, the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of software tests, validation testing, may begin.
Validation can be defined as successful when the software functions in a manner that can reasonably be expected by the customer. Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements. After each validation test case has been conducted, either the function or performance characteristics conform to specification and are accepted, or a deviation from specification is uncovered and a deficiency list is created. In this case, testing was done from the user's perspective: everything was integrated, and it was verified that data passes correctly from one class to another, that the protocol works properly between client and server, that the retransmission class reports proper times, and that data is displayed properly. Everything is in its place, the desired output is produced for proper input, and it was made sure that proper errors are raised when wrong inputs are given.

5.1.5 White Box Testing

White box testing focuses on the program control structure. All statements in the program are executed at least once during testing, and all logical conditions are exercised.

5.1.6 System Testing

System testing is done when the entire system has been fully integrated. Its purpose is to test how the different modules interact with each other and whether the system provides the functionality that was expected. It consists of the following steps: program testing, system testing, system documentation, and user acceptance testing.

5.1.7 Regression Testing

Regression testing is the retesting of a previously tested program following modification, to ensure that faults have not been introduced or uncovered as a result of the changes and that the modified system still meets its requirements. It is performed whenever the software or its environment is changed.

5.1.8 Functional Testing

Functional testing is performed to test whether a component, or the product as a whole, functions as planned and as it will actually be used when sold.

5.1.9 Black Box Testing

Black box testing is designed to uncover errors in the functional requirements without regard to the internal workings of a program. It focuses on the information domain of the software, deriving test cases by partitioning the input and output domains of a program in a manner that provides thorough test coverage.

5.1.10 Equivalence Partitioning

In the equivalence partitioning method we check the output for different classes of input. We checked the system by entering different inputs and verifying whether it works for all of them.

5.1.11 Boundary Value Analysis

In boundary value analysis we check the values at the boundaries, such as the 0th row or the last row. Sometimes an array is used from position 1 while it actually takes values from position 0, which makes the system fail at boundary values. We tested the application for all boundary values to check whether it works correctly; a small illustrative test appears below.
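To make the boundary-value idea concrete, here is a small, self-contained sketch testing a 5-digit PIN-code validator like the one implied by the registration form. The validator and its name (IsValidPinCode) are illustrative assumptions, not code from the project:

```csharp
using System;
using System.Linq;

public static class Validators
{
    // Assumed rule for illustration: a PIN code is exactly 5 digits.
    public static bool IsValidPinCode(string pin)
    {
        return pin != null && pin.Length == 5 && pin.All(char.IsDigit);
    }
}

public class PinCodeBoundaryTests
{
    public static void Main()
    {
        // Boundary value analysis: probe just below, at, and just above
        // the length boundary rather than arbitrary mid-range values.
        Check(!Validators.IsValidPinCode("1234"),   "4 digits rejected");
        Check(Validators.IsValidPinCode("12345"),   "5 digits accepted");
        Check(!Validators.IsValidPinCode("123456"), "6 digits rejected");
        Check(!Validators.IsValidPinCode(""),       "empty input rejected");
    }

    static void Check(bool ok, string label)
    {
        Console.WriteLine((ok ? "PASS " : "FAIL ") + label);
    }
}
```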
The following sample requirements were traced during testing:

- RS1 (Essential): the system should have a login. A login box should appear when the system is invoked; logins are assigned by the admin when the user opens an account.
- RS2 (Essential): the system should have help screens. Help about the various features of the system should be provided in sufficient detail in a Q&A format; policies (such as the commission charged for various operations) should also be part of the help.
- RS3 (Essential): the system should lock the login id if a wrong password is entered 3 times in a row. After 2 false attempts the user should be given a warning, and at the 3rd false attempt the id should be locked. This is a must to prevent fraudulent users from logging into the system.
- RS4 (Desirable): the user should have the facility to change his passwords. The login password and the transaction password should be different.

5.1.12 Resource Management

5.1.12.1 Roles and Responsibilities

- QA Engineer: prepare and update test cases; test builds; log defects; verify bug fixes; prepare test results for each build; prepare a defect summary report for each build.
- QA Lead: prepare the test plan; review test cases; verify/suggest changes in the test strategy; communicate changes in build dates and verifications.
- QC Manager: review the test plan; oversee testing activities.

5.1.13 Test Schedule

Since the project deliverables are dynamic, so are the test schedules.

5.1.14 Assumptions

- All functional requirements are properly defined and meet users' needs.
- The developers perform adequate unit testing and code review before sending modules to QA.
- The developers fix all defects identified during unit testing prior to system testing; otherwise the defects are mentioned in the release notes.
- The application will be delivered on the expected delivery date according to the schedule; delivery and downtime delays cause adjustments to the test schedule and can become a risk for on-time product delivery.
- The QA team is involved in initial project discussions and has working knowledge of the proposed production system before integration and system testing begin.
- Change-control procedures are followed.
- The number of test cases has a direct impact on the time it takes to execute the test plan.
- During the test process, all required interfaces are available and accessible in the QA environment.
- Testing occurs on the most current version of the build in the QA environment.
- All incidents identified during testing are documented by QA, with priority and severity assigned according to previously defined guidelines.
- The project manager is responsible for the timely resolution of all defects, and defect resolution does not impede testing.
- Communication between all groups on the project is paramount to its success, so QA is involved in all relevant project communication.
- Sufficient time is incorporated into the schedule not only for testing but also for unit testing by developers, test planning, verification of defect fixes, and regression testing by QA.

5.1.15 Defect Classification

The following defect priorities are defined, in order of precedence:
- Urgent: the defect must be resolved immediately for the next build drop, as testing cannot proceed further.
- High: the defect must be resolved as soon as possible because it is impairing development and/or testing activities; system use will be severely affected until the defect is fixed.
- Medium: the defect should be resolved in the normal course of development activities; it can wait until a new build or version is created.
- Low: the defect repair can be put off indefinitely; it can be resolved in a future major system revision or not resolved at all.

The following defect severities are defined, in order of precedence:
- Causes crash: the defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system.
- Critical: the defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system. There is no way to make the failed component(s) work; however, there are acceptable processing alternatives that will yield the desired result.
- Major: the defect does not result in a failure, but causes the system to produce incorrect, incomplete, or inconsistent results, or impairs the system's usability.
- Minor: the defect does not cause a failure, does not impair usability, and the desired processing results are easily obtained by working around the defect.
- Enhancement: the defect is the result of non-conformance to a standard, is related to the aesthetics of the system, or is a request for an enhancement; defects at this level may be deferred.

5.1.16 Summary

This chapter documented the results of the quality assurance procedure and the variety of tests performed on the implemented system to verify its completeness, correctness, and user acceptability. This completes the total system development process: the system developed satisfies all the requirements of the user and is ready for deployment at the user site.

5.2 Debugging & Code Improvement

Debugging was carried out throughout development as defects were found.

Chapter 6 System Security Measures

The measures below can be viewed as layers of protection: for each layer of security added, the system becomes more protected. Like a chain, however, the entire shield may be broken if there is a weak link. The layers are:
- server security
- dynamic page generation
- database connections
- table access control
- user-authentication security
- session security

Validation checks: the following types of checks are used (a small sketch follows at the end of this section):
a. data type
b. length
c. constraints
d. blank field
e. format

Data type: character types are used for character data, numeric types for numbers, and date types for dates. No numeric value is accepted in a date field, and characters are never accepted in numeric fields; for example, a phone number field never accepts characters, and if a person wrongly inputs one, a message is shown. Once the problem is removed, the user can perform further operations.

Length: when a maximum length is defined, the field never accepts more data. For example, if a numeric length of 5 is defined, the field stores at most 5 characters; if the user enters more characters than required, a message is displayed and processing stops.

Constraints: ranges are defined for data, and if a value is outside its range an error message is displayed. For example, a product code must be four characters, and a date field must be 8 characters.

Blank field: when a user adds data and a required field is blank, a message is displayed without halting the program, but processing stops.

Format: predefined formats are used consistently rather than changing from day to day; for example, the date format DD/MM/YYYY (e.g. 01/01/2005) is used in all date fields. If the user enters another format, a message is displayed.
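The checks described above can be centralised in one small helper class. The following is a hypothetical sketch (field names and rules are illustrative, not the project's exact ones) showing the data-type, length, blank-field, and format checks from this section:

```csharp
using System;
using System.Globalization;
using System.Linq;

public static class ValidationChecks
{
    // Blank-field check: the value must be present.
    public static bool NotBlank(string value) =>
        !string.IsNullOrWhiteSpace(value);

    // Data-type plus length check: digits only, with an exact length,
    // e.g. an assumed 5-character numeric field such as a PIN code.
    public static bool IsNumericOfLength(string value, int length) =>
        value != null && value.Length == length && value.All(char.IsDigit);

    // Format check: dates must match the report's DD/MM/YYYY convention.
    public static bool IsValidDate(string value) =>
        DateTime.TryParseExact(value, "dd/MM/yyyy",
            CultureInfo.InvariantCulture, DateTimeStyles.None, out _);
}

public class Demo
{
    public static void Main()
    {
        Console.WriteLine(ValidationChecks.NotBlank(""));                  // False
        Console.WriteLine(ValidationChecks.IsNumericOfLength("98765", 5)); // True
        Console.WriteLine(ValidationChecks.IsValidDate("01/01/2005"));     // True
        Console.WriteLine(ValidationChecks.IsValidDate("2005-01-01"));     // False
    }
}
```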
Public and private key security provides a further layer of protection.

Chapter 7 Maintenance

Maintenance of the project is very easy due to its modular design and concept: any modification can be done very easily. All the data are stored in the software as per user need, and if the user wants to change something, he has to change only that particular data, as the change is reflected everywhere in the software. The kinds of maintenance applied are:

Breakdown maintenance: applied when an error occurs and the system halts so that further processing cannot be done. At this point the user can view the documentation or consult us for rectification, and we will analyse and change the code if needed. Example: if the user gets the error "report width is larger than paper size" while printing a report, so that reports cannot be generated, viewing the help documentation and changing the default printer's paper size to A4 rectifies the problem.

Preventative maintenance: performed by the user at regular intervals for the smooth functioning (operation) of the software, following the procedures and steps mentioned in the manual.

Some reasons for maintenance are:
- Error correction (corrective maintenance): rectifying errors that were not caught during testing and surfaced after the system was implemented.
- New or changed requirements: business requirements change as opportunities change.
- Improved performance or maintainability (preventive maintenance): changes made to improve system performance or to make the system easier to maintain in the future.
- Advances in technology (adaptive maintenance): all the changes made to a system in order to introduce a new technology.

Chapter 8 Cost Estimation of Project

Cost in the project arises from the requirements in software, hardware, and human resources. The size of the project is the primary cost factor; the other factors have a lesser effect. The Constructive Cost Model (COCOMO), developed by Boehm, helps to estimate the total effort in person-months of the technical staff.

Overview of COCOMO: the COCOMO cost estimation model is used by thousands of software project managers and is based on a study of hundreds of software projects. Unlike other cost estimation models, COCOMO is an open model, so all of its details are published, including:
- the underlying cost estimation equations;
- every assumption made in the model (e.g. "the project will enjoy good management");
- every definition (e.g. the precise definition of the Product Design phase of a project);
- the costs included in an estimate, stated explicitly (e.g. project managers are included, secretaries are not).

Because COCOMO is well defined and does not rely on proprietary estimation algorithms, a tool such as Costar offers these advantages to its users:
- COCOMO estimates are more objective and repeatable than estimates made by methods relying on proprietary models;
- COCOMO can be calibrated to reflect your software development environment and to produce more accurate estimates;
- Costar is a faithful implementation of the COCOMO model that is easy to use on small projects, yet powerful enough to plan and control large projects.

Basic COCOMO model, source lines of code: the COCOMO calculations are based on your estimate of a project's size in source lines of code (SLOC). SLOC is defined such that:
- only source lines that are delivered as part of the product are included (test drivers and other support software are excluded);
- source lines are created by the project staff (code created by application generators is excluded);
- one SLOC is one logical line of code;
- declarations are counted as SLOC.

The scale drivers: in the COCOMO II model, some of the most important factors contributing to a project's duration and cost are the scale drivers. You set each scale driver to describe your project; these scale drivers determine the exponent used in the effort equation. The five scale drivers are:
- Precedentedness
- Development Flexibility
- Architecture / Risk Resolution
- Team Cohesion
- Process Maturity

Note that the scale drivers have replaced the development mode of COCOMO 81.
The first two scale drivers, Precedentedness and Development Flexibility, actually describe much the same influences that the original development mode did. Cost drivers refine the estimate further; check the Costar help for details about their definitions and how to set them.

The COCOMO II equations (the original headings here were garbled; the standard COCOMO II.2000 forms are reproduced):

    Effort (person-months) = 2.94 × EAF × (KSLOC)^E

where EAF is the effort adjustment factor derived from the cost drivers and E is the exponent derived from the five scale drivers. The schedule equation then estimates duration from effort:

    Duration (months) = 3.67 × (Effort)^SE

where SE is the schedule exponent derived from the scale drivers.

Chapter 9 Snapshots

Screenshots of the login page, the registration page, and the user home page appear as figures in the original report.

Future Scope & Further Enhancement of the Project

This project can be enhanced to provide further functionality to customers. It is unreasonable to consider a computer-based information system complete or finished; the system continues to evolve throughout its life cycle even if it is successful, and that is the case with this system too. Owing to the creative nature of design, some miscommunications remain between the users and the developers, so certain aspects of the system must be modified as operational experience is gained with it. As users work with the system, they develop ideas for changes and enhancements.

Bibliography

1. Herbert Schildt, "The Complete Reference: ASP.NET Using C#".
2. Horstmann and Gary Cornell, "ASP.NET, Volume I".
3. Phil Hanna, "The Complete Reference: AJAX 2.0".
4. Anisha Bhakaria, "JSP in 21 Days".
5. Roger Pressman, "Software Engineering".
6. Grady Booch, "UML Guide".
7. Ivan Bayross, "SQL, PL/SQL".
8. Bill Kennedy, "HTML Guide".
9. David Flanagan, "JavaScript Guide".
10. Henry Korth, "Database System Concepts".
11. ASP Documentation.
12. Oracle Documentation.
13. Google Search.

Glossary

- API: Application Programming Interface.
- DBMS: Database Management System; a complex set of programs that control the organisation, storage, and retrieval of data.
- GUI: a graphical user interface that has windows, buttons, and menus used to carry out tasks.
- SqlClient (ADO.NET): database connectivity for SQL Server; the client API supports application-to-driver-manager communication, and the driver API supports driver-manager-to-driver communication.
- ODBC: Open Database Connectivity.
- Project: any piece of work that is undertaken or attempted.
- Report: a written document describing the findings of some individual or group.
- SQL: Structured Query Language.

Appendix

Appendix A: About the Operating System

Windows is the world's most popular operating system, and one reason for this is its graphical user interface (GUI): Windows lets users issue commands by clicking icons and work with programs within easily manipulated screens called, appropriately, windows. Windows 98 represents the marriage of the Windows operating system and internet access. This unique melding of form and function, known as web integration, helps the user perform routine computer tasks such as writing a letter while maintaining seamless access to the information we need from the internet. Web integration also changes the way we interact with the Windows operating system: command and navigation procedures, as well as the look of the Windows 98 interface, more closely resemble their counterparts on the web, and Windows 98 lets the user manage files and folders using the methodology of the internet and the World Wide Web. Thus Windows offers these advantages:

Easier to use: with desktop options such as single-clicking to open files and the addition of browse buttons in every window.
A user can attach multiple monitors to a single computer, dramatically increasing the size of the workspace. Installing new hardware is easy because Windows 98 supports the Universal Serial Bus standard, allowing new hardware to be plugged in and used immediately without restarting the computer.

More reliable: users can consult the online support website for answers to common questions and keep their copies of Windows up to date. Windows 98 tools can regularly test the hard disk and system files and even automatically fix some problems; the troubleshooters and the Dr. Watson diagnostic tool also help solve computer problems.

Faster: the Maintenance Wizard makes it easy to improve the computer's speed and efficiency, and the power-management feature allows newer computers to go into hibernation mode and awaken instantly instead of requiring a shutdown and restart. The FAT32 file system stores files more efficiently and saves hard disk space.

True web integration: the Internet Connection Wizard makes connecting to the web simple. Using the web-style Active Desktop, web pages can be viewed as the desktop wallpaper, and Microsoft Outlook Express can send e-mail and post messages to internet newsgroups.

More entertaining: Windows 98 supports DVD, digital audio, and VRML, so high-quality movies and audio can be played on the computer, along with the full effect of web pages that use virtual-reality features. Television broadcasts can also be watched, and TV programme listings checked, using Microsoft WebTV for Windows.

Appendix B: About Visual Basic .NET

Today's software development needs GUI-based front-end tools that can connect to relational database engines. These give the programmer the opportunity to develop client/server-based commercial applications that combine the power and ease of use of a GUI with the multi-user capabilities of NT-based RDBMS engines such as SQL Server 2008. From the array of GUI-based front-end tools, Visual Basic .NET was selected because of its strong compatibility with SQL Server 2008 and because its security model integrates well with the SQL Server 2008 database. Visual Basic .NET offers a host of technical advantages over many other front-end tools.

Appendix C: Introduction to the SQL Server 2008 Server

This appendix provides an overview of the SQL Server 2008 server. The topics include:
- Introduction to Databases and Information Management
- Database Structure and Space Management
- Memory Structure and Processes
- The Object-Relational Model for Database Management
- Data Concurrency and Consistency
- Distributed Processing and Distributed Databases
- Startup and Shutdown Operations
- Database Security
- Database Backup and Recovery
- Data Access

Introduction to Databases and Information Management

A database server is the key to solving the problems of information management. In general, a server must reliably manage a large amount of data in a multi-user environment so that many users can concurrently access the same data, all while delivering high performance. A database server must also prevent unauthorised access and provide efficient solutions for failure recovery.
The SQL Server 2008 server provides efficient and effective solutions with the following features:

- Client/server environments (distributed processing): takes full advantage of a given computer system or network.
- Large databases and space management: supports the largest of databases, which can contain terabytes of data, and allows full control of space usage to make efficient use of expensive hardware devices.
- Many concurrent database users: supports large numbers of concurrent users executing a variety of database applications operating on the same data, minimising data contention and guaranteeing data concurrency.
- High transaction-processing performance: maintains the preceding features with a high degree of overall system performance, so database users do not suffer from slow processing performance.
- High availability: at some sites, SQL Server 2008 works 24 hours per day with no downtime to limit database throughput; normal system operations such as database backup, and partial computer-system failures, do not interrupt database use.
- Controlled availability: can selectively control the availability of data at the database level and sub-database level; for example, an administrator can disallow use of a specific application so that the application's data can be reloaded without affecting other applications.
- Openness and industry standards: adheres to industry-accepted standards for the data-access language, operating systems, user interfaces, and network communication protocols; it is an open system that protects a customer's investment. SQL Server 2008 also supports the Simple Network Management Protocol (SNMP) standard for system management, which allows administrators to manage heterogeneous systems with a single administration interface.
- Manageable security: to protect against unauthorised database access and use, SQL Server 2008 provides fail-safe security features to limit and monitor data access, making it easy to manage even the most complex designs for data access.
- Database-enforced integrity: enforces data integrity, the business rules that dictate the standards for acceptable data, reducing the costs of coding and managing such checks in many database applications.
- Portability: works under different operating systems; applications developed for SQL Server 2008 can be ported to any supported operating system with little or no modification.
- Compatibility: compatible with industry standards, including most industry-standard operating systems; applications developed for SQL Server 2008 can be used on virtually any system with little or no modification.
- Distributed systems: for networked, distributed environments, combines the data physically located on different computers into one logical database that can be accessed by all network users. Distributed systems have the same degree of user transparency and data consistency as non-distributed systems, yet receive the advantages of local database management.
- Replicated environments: lets you replicate groups of tables and their supporting objects to multiple sites, supporting replication of both data-level and schema-level changes to those sites. Its flexible replication technology supports basic primary-site replication as well as advanced dynamic and shared-ownership models.
The following sections provide a comprehensive overview of the SQL SERVER 2008 architecture. Each section describes a different part of the overall architecture.
https://www.scribd.com/document/221733126/final-DoctorFinder-report-1-docx
CC-MAIN-2018-39
en
refinedweb
a model function (or model_fn) implements the ML algorithm. the only difference between working with pre-made Estimators and custom Estimators is: with pre-made Estimators, someone already wrote the model function for you; with custom Estimators, you must write the model function.

the model function we'll use has the following call signature:
### def my_model_fn( features, labels, mode, params )

the first line of the model_fn calls tf.feature_column.input_layer to convert the feature dictionary and feature_columns into input for your model, as follows:
### net = tf.feature_column.input_layer(features, params['feature_columns'])

if you are creating a deep neural network, you must define one or more hidden layers. the layers API provides a rich set of functions to define all types of hidden layers, including convolutional, pooling, and dropout layers. later on, these logits will be transformed into probabilities by the tf.nn.softmax function.

the Estimator framework then calls your model function with mode set to ModeKeys.TRAIN. for each mode value, your code must return an instance of tf.estimator.EstimatorSpec, which contains the information the caller requires.

predictions holds the following three key/value pairs: class_ids holds the class id (0, 1 or 2) representing the model's prediction of the most likely species for this example; probabilities holds the three probabilities; logits holds the raw logit values.

for both training and evaluation we need to calculate the model's loss. this is the objective that will be optimized. this function returns the average over the whole batch:
### loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

tensorflow provides a Metrics module tf.metrics to calculate common metrics. the tf.metrics.accuracy function compares our predictions against the true values, that is, against the labels provided by the input function. the tf.train package provides many other optimizers; feel free to experiment with them. the minimize method also takes a global_step parameter. tensorflow uses this parameter to count the number of training steps that have been processed (to know when to end a training run).
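pulling the pieces above together, a minimal sketch of such a model_fn could look like this (tf 1.x style; the hidden_units and n_classes params keys are assumptions, not given in these notes):

import tensorflow as tf

def my_model_fn(features, labels, mode, params):
    # convert the feature dict + feature_columns into the input layer
    net = tf.feature_column.input_layer(features, params['feature_columns'])
    # hidden layers from the layers API
    for units in params['hidden_units']:
        net = tf.layers.dense(net, units=units, activation=tf.nn.relu)
    # one logit per class; tf.nn.softmax later turns these into probabilities
    logits = tf.layers.dense(net, params['n_classes'], activation=None)

    predicted_classes = tf.argmax(logits, 1)
    if mode == tf.estimator.ModeKeys.PREDICT:
        predictions = {
            'class_ids': predicted_classes[:, tf.newaxis],
            'probabilities': tf.nn.softmax(logits),
            'logits': logits,
        }
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)

    # loss is needed for both training and evaluation
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    accuracy = tf.metrics.accuracy(labels=labels, predictions=predicted_classes)

    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(
            mode, loss=loss, eval_metric_ops={'accuracy': accuracy})

    # ModeKeys.TRAIN: minimize the loss, counting steps via global_step
    optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)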
https://blog.csdn.net/XianxinMao/article/details/80334859
CC-MAIN-2018-39
en
refinedweb
import java.lang.reflect.*;

public class GetConstructorsExample {
    public static void main(String[] args) {
        Class cls = java.lang.String.class;
        Constructor constructor = cls.getConstructors()[0];
        System.out.println(constructor.getName());
    }
}

The above code takes the first element of the array returned by getConstructors() and retrieves the constructor name. When run, this program prints "java.lang.String" on the standard output: this is not surprising, since Java constructors are named after the class in which they are defined.
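As a small illustrative variation on the tip above (not part of the original example), you could loop over the whole array and print each public constructor together with its parameter count:

import java.lang.reflect.*;

public class ListConstructorsExample {
    public static void main(String[] args) {
        Class cls = java.lang.String.class;
        for (Constructor constructor : cls.getConstructors()) {
            // every entry shares the class name, so print the arity as well
            System.out.println(constructor.getName()
                    + " takes " + constructor.getParameterTypes().length
                    + " parameter(s)");
        }
    }
}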
https://www.java-tips.org/java-lang/2511-how-to-retrieve-the-constructors-of-a-java-class.html
CC-MAIN-2018-39
en
refinedweb
Contains settings that define an individual column within list editors.

public class MVCxListBoxColumn : ListBoxColumn

Public Class MVCxListBoxColumn
    Inherits ListBoxColumn

The MVCxListBoxColumn class implements the functionality of a list column. This functionality is used by list editors such as the ComboBox and ListBox. Columns can be used to visually organize an editor's list data into a more readable and convenient form.

Columns are maintained within a collection of the MVCxListBoxColumnCollection type, which can be accessed using the Columns property available through the list editor's settings. An individual column can be accessed within the collection using indexer notation.

An editor's column can be mapped to a data source's column by using the ListBoxColumn.FieldName property. A column's visibility and visual order can be controlled via the WebColumnBase.Visible and ListBoxColumn.VisibleIndex properties. Use the WebColumnBase.Caption property to define a column's caption, and the WebColumnBase.Name property to specify a unique identifier for the column.

Inheritance hierarchy: System.Object > StateManager > CollectionItem > WebColumnBase > ListBoxColumn > MVCxListBoxColumn
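Purely as a rough sketch of how these pieces fit together (the settings object and the exact path to the Columns collection are assumptions here; only the column property names come from the documentation above):

// hypothetical setup code; "listBoxSettings" stands in for a ListBox editor's settings
var column = new MVCxListBoxColumn();
column.FieldName = "City";       // map this column to a data source column
column.Caption = "City";         // caption shown for the column (WebColumnBase.Caption)
column.Name = "colCity";         // unique identifier for the column (WebColumnBase.Name)
column.Visible = true;           // controls visibility (WebColumnBase.Visible)
column.VisibleIndex = 0;         // controls visual order (ListBoxColumn.VisibleIndex)
listBoxSettings.Properties.Columns.Add(column);

// individual columns can be accessed with indexer notation
var firstColumn = listBoxSettings.Properties.Columns[0];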
https://documentation.devexpress.com/AspNet/DevExpress.Web.Mvc.MVCxListBoxColumn.class
CC-MAIN-2018-39
en
refinedweb
Provided by: lam-mpidoc_7.1.4-6build1_all

NAME
       MPI_Waitsome - Waits for some given communications to complete

SYNOPSIS
       #include <mpi.h>
       int MPI_Waitsome(int incount, MPI_Request* reqs, int *ndone,
                        int *indices, MPI_Status *stats)

INPUT PARAMETERS
       incount - length of array_of_requests (integer)
       reqs    - array of requests (array of handles)

OUTPUT PARAMETERS
       ndone   - number of completed requests (integer)
       indices - array of indices of operations that completed (array of integers)
       stats   - array of status objects for operations that completed (array of Status), or the MPI constant MPI_STATUSES_IGNORE

NOTES
       The array of indices is in the range 0 to incount - 1 for C and in the range 1 to incount for Fortran. Null requests are ignored; if all requests are null, then the routine returns with ndone set to MPI_UNDEFINED.

ERRORS
       MPI_ERR_IN_STATUS - The actual error value is in the MPI_Status argument. Note that if this error occurs and MPI_STATUS_IGNORE or MPI_STATUSES_IGNORE was used as the status argument, the actual error will be lost.

LAM/MPI waitsome.c
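A minimal usage sketch in C (illustrative only; it assumes the requests in reqs were already started, for example with MPI_Irecv, and that nreq is at most 64):

#include <mpi.h>
#include <stdio.h>

/* drain at least one of nreq active nonblocking requests */
void drain_some(MPI_Request *reqs, int nreq)
{
    int ndone;
    int indices[64];
    MPI_Status stats[64];

    /* blocks until at least one of the requests completes */
    MPI_Waitsome(nreq, reqs, &ndone, indices, stats);

    if (ndone != MPI_UNDEFINED) {
        for (int i = 0; i < ndone; i++) {
            /* indices[i] tells us which request finished */
            printf("request %d completed (source %d)\n",
                   indices[i], stats[i].MPI_SOURCE);
        }
    }
}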
http://manpages.ubuntu.com/manpages/eoan/man3/MPI_Waitsome.3.html
CC-MAIN-2020-40
en
refinedweb
Add a Custom Event Handler

You will learn
- How to write a custom event handler for CAP Java
- Which event handler classes and methods are available

In the following tutorials, you'll learn that the CAP Java runtime can handle all CRUD events (create, read, update, and delete) triggered by OData requests out of the box. For now, we'll show you how to do this manually, so that you can see how easy it is to write a custom event handler to extend the event handling process.

Create the Java package by creating a new folder called handlers under srv/src/main/java/com/sap/cap/productsservice.

Create the Java class file AdminService.java in the created handlers folder with the following content, and make sure you save the file:

package com.sap.cap.productsservice.handlers;

import java.util.HashMap;
import java.util.Map;

import org.springframework.stereotype.Component;

import com.sap.cds.services.cds.CdsCreateEventContext;
import com.sap.cds.services.cds.CdsReadEventContext;
import com.sap.cds.services.cds.CdsService;
import com.sap.cds.services.handler.EventHandler;
import com.sap.cds.services.handler.annotations.On;
import com.sap.cds.services.handler.annotations.ServiceName;

@Component
@ServiceName("AdminService")
public class AdminService implements EventHandler {

    private Map<Object, Map<String, Object>> products = new HashMap<>();

    @On(event = CdsService.EVENT_CREATE, entity = "AdminService.Products")
    public void onCreate(CdsCreateEventContext context) {
        context.getCqn().entries().forEach(e -> products.put(e.get("ID"), e));
        context.setResult(context.getCqn().entries());
    }

    @On(event = CdsService.EVENT_READ, entity = "AdminService.Products")
    public void onRead(CdsReadEventContext context) {
        context.setResult(products.values());
    }
}

This class now handles the READ and CREATE events that target the Products entity of the AdminService.
- The READ operation just returns all entities kept in memory.
- The CREATE event extracts the payload from the CQN representation and stores it in memory.

CDS Query Notation (CQN) is the common language in CAP to run queries against services. It can be used to talk to the services defined by your model, but also to remote services, such as the database.

The event handler uses the following APIs, which are available for service providers in CAP Java:
- Event handler classes have to implement the marker interface EventHandler and register themselves as Spring Beans (@Component). The marker interface is important, because it enables the CAP Java runtime to identify these classes among all Spring Beans.
- Event handler methods are registered with @On, @Before, or @After annotations. Every event, such as an entity creation, runs through these three phases. Each phase has a slightly different semantic. You'll learn more about these semantics in the subsequent tutorial.
- The annotation @ServiceName specifies the default service name all event handler methods apply to. Here this is AdminService, as this was also the name used when defining the service in the CDS model.
- Event handler methods get an event-specific event context parameter, which provides access to the input parameters of the event and the ability to set the result. For example, let's look at the CdsCreateEventContext context parameter. The event we're extending is the CREATE event. The type of the context variable is specific to this extended CREATE event. The onCreate method returns void, as the result is set by running context.setResult(…).

Stop your application if it's still running by using CTRL+C in the terminal.
Restart the application by running the following command in the terminal:

cd ~/projects/products-service && mvn clean spring-boot:run

Choose Open in New Tab when prompted. A new browser tab should be opened with your application.

Try to insert some data into the running application. For example, use cURL from a new terminal to run the following request. You can open terminals by choosing Terminal > New Terminal from the main menu.

Execute the following command in the terminal to insert some data into the running application, while the process for the application is still running in the other terminal window (the URL assumes the Spring Boot default port 8080):

curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"ID": 42, "title": "My Tutorial Product", "descr": "You are doing an awesome job!"}' \
  http://localhost:8080/odata/v4/AdminService/Products

The POST request causes an OData Insert on the entity Products of the service AdminService. The properties of the record to be created are passed in the body of the request (-d argument in the curl command) as JSON (argument -H "Content-Type: application/json" in the curl command).

The response will be the created record and should look similar to this output: `{"@context":"$metadata#Products/$entity","ID":42,"title":"My Tutorial Product","descr":"You are doing an awesome job!"}`.

To read the data again, open the welcome page of the application. Choose Products from the app welcome page or add /odata/v4/AdminService/Products to the app URL. You should see something like this:

This is the record you've inserted in the previous step through the curl command. If the data isn't formatted the way it is shown in the screenshot, use the JSON Formatter extension for Google Chrome or another JSON formatter for your preferred browser. The data itself should be the same anyway.

Great job! You have successfully added custom logic to handle specific requests. The next tutorial will show you how to extend the application and build the products service from scratch. In addition, you'll use an actual database as the persistence and see some of the features the CAP Java SDK provides out of the box, without a single line of custom coding.
https://developers.sap.com/tutorials/cp-cap-java-custom-handler.html
CC-MAIN-2020-40
en
refinedweb
PROBLEM LINK:

Author: Vikas Yadav
Tester: Keshav Kumar
Editorialist: Vikas Yadav

DIFFICULTY:
EASY

PREREQUISITES:

PROBLEM:
Find the number of ways of arranging the letters of a given string such that all the vowels always come together.

QUICK EXPLANATION:
Count the number of vowels and consonants in the given string. If there is no vowel in the string, the answer is simply the factorial of the number of consonants. Otherwise, multiply the factorial of the number of vowels by the factorial of (number of consonants + 1).

EXPLANATION:
Let's say the length of the string is N, and there are C consonants and V vowels in the string. When the vowels are always together, they can be treated as one letter. Then we have to arrange the consonant letters plus this one combined letter, which gives (C+1)! different arrangements. The vowels can then be arranged among themselves in V! ways. So there will be (C+1)! * V! ways in total.

Example: Let's assume the string is LEADING. It has 7 different letters. When the vowels E, A, I are always together, they can be treated as one letter. Then we have to arrange the letters L, N, D, G, (EAI). Now (4 + 1) = 5 letters can be arranged in 5! = 120 ways. The vowels (EAI) can be arranged among themselves in 3! = 6 ways. Required number of ways: (120 * 6) = 720.

For more details check the Setter's Solution.

SOLUTIONS:

Setter's Solution

#include <stdio.h>

/* Checks whether a given (uppercase) character is a vowel */
int isVowel(char ch)
{
    char vowels[] = "AEIOU";
    for (int i = 0; i < 5; i++) {
        if (ch == vowels[i])
            return 1;
    }
    return 0;
}

/* Recursive factorial function */
unsigned long long factorial(int n)
{
    if (n == 0 || n == 1)
        return 1;
    else
        return n * factorial(n - 1);
}

int main(void)
{
    int test;
    scanf("%d", &test);
    while (test--) {
        int size, vowel_count = 0, consonant_count, i;
        scanf("%d", &size);
        char str[size + 1];
        scanf("%s", str);
        for (i = 0; i < size; i++)
            if (isVowel(str[i]))
                vowel_count++;
        consonant_count = size - vowel_count;
        unsigned long long total_ways;
        if (vowel_count)
            total_ways = factorial(vowel_count) * factorial(consonant_count + 1);
        else
            total_ways = factorial(consonant_count);
        printf("%llu\n", total_ways);
    }
    return 0;
}
https://discuss.codechef.com/t/cmeet-editorial/74206
CC-MAIN-2020-40
en
refinedweb
This is part of the Ext JS to React blog series. You can review the code from this article on the Ext JS to React Git repo.

The carousel component, popularized in Sencha Touch and now available in Ext JS in the modern toolkit, is similar to the tab panel in that views in a card layout are shown and hidden using a navigation bar. Unlike the tab panel, carousel navigation is simplified by dropping text and icons in favor of a simple nav element like a circle with "active" styling to show which child item in the card array is in view. In addition to interacting with the nav elements, you can swipe to reveal neighboring cards in a carousel view.

Note: While not a requirement for React, this article's code examples assume you're starting with a starter app generated by create-react-app.

React Carousel Class

Let's look at an example of the carousel view in React. We'll start by defining a Carousel class. First, we'll need to install the react-swipeable-views package:

npm install --save react-swipeable-views

The react-swipeable-views package enables the animated card-swapping action as you navigate between cards using the dot indicators as well as dragging / swiping between cards. Users of the React Material UI library may recognize its use from the "Swipeable example" in the tabs demo.

import React, { Component } from 'react';
import SwipeableViews from 'react-swipeable-views';
import './Carousel.css';

class Carousel extends Component {
    static defaultProps = {
        activecard: 0,
        className: '',
        position: 'bottom'
    }

    state = {
        activecard: this.props.activecard
    }

    render () {
        let { className } = this.props;
        className = className ? ` ${className}` : '';

        const { children, position } = this.props;
        const { activecard } = this.state;
        const xPositions = ['top', 'bottom'],
              axis = xPositions.includes(position) ? 'x' : 'y';

        return (
            <div
                {...this.props}
                className={`carousel ${position}${className}`}
            >
                <div className={`nav-strip`}>
                    {React.Children.map(children, (child, i) => {
                        const isActive = (i === activecard) ? 'active' : '';
                        return <div
                            onClick={this.onNavClick.bind(this, i)}
                            className={`nav ${isActive}`}
                        >
                            <span className="nav-dot"></span>
                        </div>;
                    })}
                </div>
                <SwipeableViews
                    index={activecard}
                    onChangeIndex={this.onNavClick.bind(this)}
                    enableMouseEvents={true}
                    axis={axis}
                >
                    {React.Children.map(children, (child, i) => {
                        let { className } = child.props;
                        className = className ? ` ${className}` : '';
                        const isActive = (i === activecard) ? ' active' : '';
                        const cardProps = {
                            ...child.props,
                            style: {flex: 1},
                            className: ` card${isActive}${className}`,
                            cardindex: i,
                            activecard
                        };
                        return React.cloneElement(child, cardProps);
                    })}
                </SwipeableViews>
            </div>
        );
    }

    onNavClick (activecard) {
        this.setState({ activecard });
    }
}

export default Carousel;

React Carousel Class Explained

Above the class definition, we're importing react-swipeable-views, which we'll use to wrap the child card items in the render method (described below). The static defaultProps property sets the defaults for various props on the Carousel. The state class field initializes activecard from the activecard prop, and the onNavClick method handles the nav element click by setting activecard on the state, which then styles the active nav element and shows the associated child card.
The render method:
- Combines any className string passed in with those added by the class
- Collects the activecard from the component state object to inform the nav elements / cards which is currently active / visible
- In the return:
  - Creates the wrapping Carousel element that will house the nav element container and card container
  - The nav element container is added and we iterate over the child nodes (cards) passed to the Carousel to create navigation elements. The active nav element is styled as active when its index matches the activecard.
  - A react-swipeable-views instance, SwipeableViews, is added to enclose the child cards passed to the Carousel. SwipeableViews enables the swiping of cards into view in addition to interacting with the nav elements.
  - We loop over the child nodes, this time calling React.cloneElement in order to add a few props like className, cardindex, and activecard to the original nodes that were passed in. The cloneElement method allows us to effectively extend the child items by taking on additional props as needed. The cloned elements are returned in an array to be the child nodes of the SwipeableViews parent. The card whose cardindex matches the activecard is shown while the other cards are hidden using CSS rules.

React Carousel CSS

The CSS used to render the Carousel view:

.carousel { position: relative; }
.nav-strip { display: flex; justify-content: center; position: absolute; pointer-events: none; top: 0; left: 0; right: 0; bottom: 0; z-index: 1; }
.nav-strip + div, .nav-strip + div .react-swipeable-view-container { height: 100%; width: 100%; }
.carousel.top .nav-strip { bottom: auto; }
.carousel.bottom .nav-strip { top: auto; }
.carousel.left .nav-strip { right: auto; }
.carousel.right .nav-strip { left: auto; }
.carousel.top .nav-strip, .carousel.bottom .nav-strip { flex-direction: row; }
.carousel.left .nav-strip, .carousel.right .nav-strip { flex-direction: column; }
.react-swipeable-view-container > div { flex-basis: 100%; background: #f7f7f7; }
.nav { text-align: center; cursor: pointer; pointer-events: all; }
.carousel.top .nav, .carousel.bottom .nav { padding: 12px 6px; }
.carousel.left .nav, .carousel.right .nav { padding: 6px 12px; }
.nav-strip .nav-dot { background-color: #d2d2d2; border-radius: 50%; height: 12px; width: 12px; display: inline-block; }
.nav-strip .nav:hover .nav-dot { background-color: #b5b5b5; }
.nav-strip .nav.active .nav-dot { background-color: #1e8bfb; }
.carousel .card { padding: 12px; }

React Carousel Example

We can create a Carousel instance like:

import React, { Component } from 'react';
import Carousel from './Carousel';

class App extends Component {
    render() {
        return (
            <Carousel style={{ height: '400px', width: '600px' }}>
                <div>Content for the first panel</div>
                <div>... and the second panel</div>
            </Carousel>
        );
    }
}

export default App;

We pass in the style prop to give the rendered component explicit dimensions. We can pass a position prop to position the navigation indicators on the "top" or "bottom". An activecard prop can also be passed to designate the initially active card view (see the sketch at the end of this post).

Conclusion

Hopefully the example demonstrates how easy it will be to get a carousel view built for your React applications. The example is relatively basic, but gets the job done for the most common use cases. It would be fairly easy to enhance the example and allow the carousel instance to stipulate whether it is oriented horizontally as shown, or vertically.
However, if you’re looking for a pre-built slider, look no further than react-slick for a very performant and highly configurable carousel view. >…
https://moduscreate.com/blog/ext-js-to-react-carousel/
CC-MAIN-2020-40
en
refinedweb
A subset of a mesh represented by a range of acceptable attribute values.

#include <vtkMultiThreshold.h>

A subset of a mesh represented by a range of acceptable attribute values. Definition at line 337 of file vtkMultiThreshold.h.

Member function documentation:
- Does the specified range fall inside the interval? For cell-centered attributes, only cellNorm[0] is examined. For point-centered attributes, cellNorm[0] is the minimum norm taken on over the cell and cellNorm[1] is the maximum. Implements vtkMultiThreshold::Set.
- Print a graphviz node name for use in an edge statement. Reimplemented from vtkMultiThreshold::Set. Definition at line 520 of file vtkMultiThreshold.h.

Member data documentation:
- The values defining the interval. These must be in ascending order. Definition at line 341 of file vtkMultiThreshold.h.
- Are the endpoint values themselves included in the set (CLOSED) or not (OPEN)? Definition at line 343 of file vtkMultiThreshold.h.
- This contains information about the attribute over which the interval is defined. Definition at line 345 of file vtkMultiThreshold.h.
https://vtk.org/doc/nightly/html/classvtkMultiThreshold_1_1Interval.html
CC-MAIN-2020-40
en
refinedweb
Raspberry Pi Cluster Node – 07 Sending data to the Slave

This post builds on my previous posts in the Raspberry Pi Cluster series by adding the ability to receive data from the master. In this update, I will be adding a way for the slave to request data and have it returned by the master.

Moving machine details into its own file

The first thing that I am going to do is move the machine details currently in the slave to a separate file. In the future, this will allow obtaining more information about the node. However, I am moving it into a separate file for now so the slave and master can both access the data. For now, my machine file will include the following function and be accessible to both slave and master:

import psutil
import platform
import multiprocessing
import socket

def get_base_machine_info():
    return {
        'hostname': socket.gethostname(),
        'cpu_percent_used': psutil.cpu_percent(1),
        'ram': psutil.virtual_memory().total,
        'cpu': platform.processor(),
        'cpu_cores': multiprocessing.cpu_count()
    }

Configuring the Master to respond to information requests

In the master's message-handling while loop I am going to add a new message type to be handled. The master will listen for any messages with the type info and return any information the slave requests. The payload will define what type of information it is looking for. The following segment of code is the new elif statement used for info type messages.

elif message['type'] == 'info':
    logger.info("Slave wants to know my info about " + message['payload'])
    if message['payload'] == 'computer_details':
        clientsocket.send(create_payload(get_base_machine_info(), "master_info"))
    else:
        clientsocket.send(create_payload("unknown", "bad_message"))

Here I am checking if the message type is info and logging a message that the slave is requesting information about the specific payload. Each payload will require different handling and more types will be added in the future. For now I have added a single type, computer_details, to match the message type the slave sends the master. This calls the get_base_machine_info() function we earlier abstracted into a function, imported from the MachineInfo file. If the slave requests information about an unknown type, a bad_message payload is created and returned to the slave. Going forward this will be a standard payload type that will be handled differently. Once the master has sent the requested data to the slave, it continues to listen for messages and act on them.

Configuring the slave to request information from the Master

I have decided that as part of the initial hello to the master the slave will send its machine details, and request the same from the master. This is also refactored a little to move the piece of code handling the machine details into the above MachineInfo file. Below is the new handshake for the slave as it joins the cluster.

logger.info("Sending an initial hello to master")
sock.send(create_payload(get_base_machine_info(), 'computer_details'))
sock.send(create_payload("computer_details", "info"))
message = get_message(sock)
logger.info("We have information about the master " + json.dumps(message['payload']))

Once we have sent our machine info we request the machine info of the master. This is again performed using create_payload with the type info and payload computer_details. Once we have sent the message asking for the master's details we then use get_message to retrieve the reply from the master.
This is used identically to how the master receives the slave's messages and uses the same underlying shared code.

Summary

Now we have a structure which lets the master and slave communicate by sending and requesting information. In the next post I will look at adding a few more payloads to let the master control the slave further. These will form the basis of the master requesting the slave to perform computation. The full code is available on GitHub; any comments or questions can be raised there as issues or posted below.
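The shared create_payload and get_message helpers referenced throughout aren't shown in this post; the real implementations live in the GitHub repo, but as a purely hypothetical sketch they could look something like this:

import json

def create_payload(payload, message_type):
    # wrap the payload and its type in a JSON envelope, encoded for the socket
    return json.dumps({'type': message_type, 'payload': payload}).encode()

def get_message(sock):
    # read one message off the socket and decode the JSON envelope;
    # a real implementation would also need framing for messages
    # larger than a single recv() call
    data = sock.recv(4096)
    return json.loads(data.decode())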
https://chewett.co.uk/blog/1781/raspberry-pi-cluster-node-07-sending-data-to-the-slave/
CC-MAIN-2020-40
en
refinedweb
So Blazor WASM has arrived with the promise of enabling us all to build modern web applications using C# instead of JavaScript. But just how quickly can you go from dotnet new blazorwasm to something useful/interesting appearing in the browser? What better way to find out than to take the Marvel Developer API and use it to drive a Blazor WASM character search engine?!

I find it difficult to build anything without visualising it first, so here's a rough mockup of what I'm aiming for…

Where to start?

I'm using VS Code, but Visual Studio, JetBrains Rider, even a terminal and text editor will work too! First we need a project.

dotnet new blazorwasm -o MarvelSearch

Then we can check to ensure it's working…

cd MarvelSearch
dotnet watch run

Now we should get the familiar Blazor project template (if you've seen it once, you've seen it a thousand times!)

Use Watch to speed up development

Here's a pro tip: when you put watch in front of run, your app will recompile every time you save your changes. This makes it much quicker to test your changes in the browser.

Let's get rid of the nav bars etc. so we have a really simple minimal page (think Google home page before they started showing doodles of the day and whatnot). By the time I've finished ripping out most of the markup from Shared/MainLayout.razor I'm left with this.

MainLayout.razor
@inherits LayoutComponentBase
<div class="main">
    <div class="content px-4">
        @Body
    </div>
</div>

I've updated Index.razor too.

Index.razor
@page "/"
<h1>Marvel Search Engine</h1>

Which leaves us with this super impressive starting point! I mean, we're halfway there, right?!

Get the markup right first

Before worrying about APIs and fetching data I'll focus on the markup for the search input and search results. Bitter experience has taught me this is the part which can end up taking the longest with any web project (with my CSS skills at least), so probably best to get it out of the way first. After a little bit of tinkering, I can get to a form that looks close(ish) to my mockup…

Index.razor
@page "/"

<h1 class="text-center text-primary">Marvel Search Engine</h1>

<div class="text-center">
    <div class="p-2">
        <input class="form-control form-control-lg w-50 mx-auto mt-4" placeholder="Character name"/>
    </div>
    <div class="p-2">
        <button class="btn btn-primary btn-lg">Search the Marvel API</button>
    </div>
</div>

So far I'm just building everything in Index.razor. Blazor is all about components, so it will likely make sense to start pulling some of this markup out into separate components soon. However, at this stage where we're just prototyping, it's quicker and easier to pull the markup around and try things out if we keep it all in one place. Then when we're happy with it, Blazor makes it trivial to pull sections of the UI out into their own components.

Now to tackle the search result "cards". It turns out Bootstrap 4.5 brings a simpler way to make cards automatically wrap onto multiple lines (using row-cols). The Blazor templates ship with a slightly earlier version (at the time of writing), so I've downloaded the latest from here…

Then replaced these two files with their newer counterparts.
- wwwroot/css/bootstrap/bootstrap.min.css
- wwwroot/css/bootstrap/bootstrap.min.css.map

Bootstrap has a super handy card component for exactly this sort of thing!
<div class="container"> <div class="row row-cols-1 row-cols-md-2 row-cols-lg-3"> <div class="col mb-4"> <div class="card"> <img src="" class="card-img-top"> <div class="card-body"> <h5 class="card-title">Spider-Man</h5> <p class="card-text"> Very spidery </p> </div> </div> </div> </div> </div> Looks like we have a reasonable first version of the UI. So far this has been all CSS and HTML but now the fun bit; we get to write C# Blazor code. Accept user input I want to grab whatever the user types into the character search box, then wire up the submit button to a handler (ready to make the Marvel API call). For this we need to do two things: - Bind the value of the Character Search input to a field - Wire up an event handler for the search button Index.razor <div class="text-center"> <div class="p-2"> <input class="form-control form-control-lg w-50 mx-auto mt-4" placeholder="Character name" @ </div> <div class="p-2"> <button class="btn btn-primary btn-lg" @Search the Marvel API</button> </div> </div> Now our markup expects a string field called _searchTerm and a method called HandleSearch. We can add these in a @code block in Index.razor. @code { private string _searchTerm; private async Task HandleSearch() { Console.WriteLine(_searchTerm); } } It turns out, Console.WriteLine does exactly what you’d hope in an application running in the browser; writes to the Developer Tools console. So now we know we can grab the search term entered by the user. Hello Marvel A quick visit to the Marvel Developer portal and a form or two later, I have a public API key I can use to search the Marvel database. Their interactive API tester has also given me the precise URL I need to use to search for Marvel characters whose name begins with a search term…<my-public-api-key> Now I just need a way to make this call from @code in Index.razor. Tweak the default HttpClient Blazor WASM projects come with an HttpClient pre-configured to use a base address of the current web site. You’ll find the code for this in Program.cs builder.Services.AddTransient(sp => new HttpClient { BaseAddress = new Uri(builder.HostEnvironment.BaseAddress) }); This would make all our API calls default to the address of the site (localhost:5000 in this case) so we’ll need to update that to point to the Marvel API. builder.Services.AddTransient(sp => new HttpClient { BaseAddress = new Uri("") }); Inject HttpClient into our component We can use dependency injection to inject HttpClient into the Index component. @page "/" @inject HttpClient HttpClient Then call it in HandleSearch. private MarvelSearchResult _searchResponse; private async Task HandleSearch() { var url = $"characters?nameStartsWith={_searchTerm}&apikey=<your-key-here>"; _searchResponse = await HttpClient .GetFromJsonAsync<MarvelSearchResult>(url, new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase }); } I’ve also added a _searchResponse field to store the results. MarvelSearchResult is a standard C# class we can use to deserialise the result of the Marvel call. 
public class MarvelSearchResult
{
    public string AttributionText { get; set; }
    public Datawrapper Data { get; set; }

    public class Datawrapper
    {
        public List<Result> Results { get; set; }
    }

    public class Result
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public Image Thumbnail { get; set; }

        public class Image
        {
            public string Path { get; set; }
            public string Extension { get; set; }
        }
    }
}

I've simplified this, omitting most of the details the Marvel API call returns, keeping just the parts I want to show in the UI.

Show data in the markup

We're almost there; time to render the results. When this component first loads, _searchResponse will be null, so we'll want to add a defensive check…

@if (_searchResponse != null)
{
}

Then we can show the attribution text (you know, for legal reasons!)

<p class="text-center">@_searchResponse.AttributionText</p>

Finally we can loop over the search results and bind the details we want to show.

@foreach (var result in _searchResponse.Data.Results)
{
}

Pulling everything together we get this:

Index.razor
@if (_searchResponse != null)
{
    <p class="text-center">@_searchResponse.AttributionText</p>
    <div class="container">
        <div class="row row-cols-1 row-cols-md-2 row-cols-lg-3">
            @foreach (var result in _searchResponse.Data.Results)
            {
                <div class="col mb-4">
                    <div class="card h-100">
                        <img src="@($"{result.Thumbnail.Path}.{result.Thumbnail.Extension}")" class="card-img-top" style="object-fit: cover; height: 300px">
                        <div class="card-body">
                            <h5 class="card-title">@result.Name</h5>
                            <p class="card-text">
                                @result.Description
                            </p>
                        </div>
                    </div>
                </div>
            }
        </div>
    </div>
}

Note I made a few tweaks to the img element.

<img src="@($"{result.Thumbnail.Path}.{result.Thumbnail.Extension}")" class="card-img-top" style="object-fit: cover; height: 300px">

For some reason the Marvel API returns the image path in two parts (path and extension), so this puts the two together with a period between them. I also added a teeny tiny bit of inline styling just to make the images stay the same height even as we move between different screen sizes etc.

The final result

We have a working Marvel Search Engine! Now there's plenty we could do from here to make this more useful, but it's a pretty good start. It's worth noting I spent much (much) more time fiddling with the CSS to make this look right than I did wiring up the API call and binding the results using Blazor!

Here's a quick recap of the entire process, from start to finish:
- Start with a mockup
- Build up the HTML (to get the rough look and feel)
- Configure HttpClient's base address in Program.cs
- Use HttpClient to call an API (from your component)
- Use defensive if checks to protect against null data (before the API call returns)

With that, we've learned that building web applications with Blazor WASM is really quite fast…

… and CSS is still hard!

Source: Here's the full code
https://jonhilton.net/blazor-wasm-calling-a-api/
CC-MAIN-2020-40
en
refinedweb
thing in that list, i.e., automated tweeting. I tweet quite frequently and I would love to have a way of automating this as well. And that's exactly what we're going to do today: tweeting using Python. We'll use a Python library called tweepy for this. Tweepy is a simple, easy to use library for accessing the Twitter API.

Accessing Twitter APIs programmatically is not just an accessibility feature but can be of enormous value too. Mining Twitterverse data is one of the key steps in sentiment analysis. Twitter chat bots have also become quite popular nowadays, with hundreds and thousands of bot accounts. This article, although it only barely scratches the surface, will hopefully help in building yourself towards that.

Setting Up

First things first, install tweepy by running pip install tweepy. The latest version at the time of writing this article is 3.5.0.

Then we need to have our Twitter API credentials. Go to Twitter Apps. If you don't have any apps registered already, go ahead and click the Create New App button. To register your app you have to provide the following three things:
- Name of your application
- Description
- Your website url

There is one more option, which is the callback URL. You can ignore that for now. Then, after reading the Twitter developer agreement (wink wink), click on the Create your Twitter application button to create a new app.

Once the app is created you should see it in your Twitter apps page. Click on it and go to the Keys and Access Tokens tab. There you will see four pieces of information. First you have your app API keys, which are the consumer key and consumer secret. Then you have your access token and access token secret. We'll need all of them to access the Twitter APIs, so have them ready. I have copied all of them and exported them as system variables. You could do the same or, if you'd like, you can read them from a file as well.

Let's get started

First you have to import tweepy and os (only if you are accessing system variables).

import tweepy
import os

Then I'll populate the access variables by reading the environment variables.

consumer_key = os.environ["t_consumer_key"]
consumer_secret = os.environ["t_consumer_secret"]
access_token = os.environ["t_access_token"]
access_token_secret = os.environ["t_access_token_secret"]

With the keys ready, we set up the authorization.

authorization = tweepy.OAuthHandler(consumer_key, consumer_secret)
authorization.set_access_token(access_token, access_token_secret)

After authorization we create an API object.

twitter = tweepy.API(authorization)

And now you can tweet from Python using this:

twitter.update_status("Tweet using #tweepy")

That is all you have to do. Just five lines of code and you can already tweet. You should try it out and check your Twitter account. I just ran this command and it produced a tweet.

Not just this, you can also tweet media. Let's tweet again, this time with a picture attached.

image = os.environ['USERPROFILE'] + "\\Pictures\\cubes.jpg"
twitter.update_with_media(image, "Tweet with media using #tweepy")

And this is the media tweet.

Tweet with media using #tweepy pic.twitter.com/9bDuw9DDJI — Durga Swaroop Perla (@durgaswaroop) December 24, 2017

When you run the previous commands, you'll see that there is a lot of output printed on the terminal. This is a status object with a lot of useful data like the number of followers you have, your profile picture URL, your location etc., pretty much everything you get from your Twitter page.
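If you want to poke at that status object yourself, update_status actually returns it, so you can capture it in a variable and inspect individual fields (a quick illustrative snippet):

# update_status returns a Status object we can inspect
status = twitter.update_status("Inspecting the status object #tweepy")
print(status.id)                     # id of the tweet we just sent
print(status.user.followers_count)   # follower count from the embedded user object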
We can make use of this information if we are building something more comprehensive.

Apart from sending regular tweets, you can also reply to existing tweets. To reply to a tweet you first need its tweet_id, which you can get from the tweet's URL. For example, the URL for the previous tweet is https://twitter.com/durgaswaroop/status/945049796238118912 and the tweet_id is 945049796238118912. Using that id, we can send another tweet as a reply.

id_of_tweet_to_reply = "945049796238118912"
twitter.update_status("Reply to a tweet using #tweepy", in_reply_to_status_id=id_of_tweet_to_reply)

The only change in the syntax is in_reply_to_status_id=id_of_tweet_to_reply, which is passed as the second argument. And with that, our new tweet will be added as a reply to the original tweet. The new reply tweet is this:

Reply to a tweet using #tweepy — Durga Swaroop Perla (@durgaswaroop) December 24, 2017

That's how easy it is to access the Twitter API with tweepy. We now know how to tweet and how to reply to a tweet. Building up from this knowledge, in a later tutorial, I can show you how to create your own Twitter chat-bot and also do Twitter streaming analysis.

The full code of things covered in this article is available as a gist.

For more programming and Python articles, check out Freblogg and Freblogg/Python.

This is the third article as part of my twitter challenge #30DaysOfBlogging.
http://www.freblogg.com/2017/12/tweeting-with-python-and-tweepy.html
CC-MAIN-2020-40
en
refinedweb
Microsoft SQL Server Training Classes in Terre Haute, Indiana

Learn Microsoft SQL Server in Terre Haute, Indiana and surrounding areas via our hands-on, expert-led courses. All of our classes are offered on an onsite, online or public instructor-led basis. Here is a list of our current Microsoft SQL Server related training offerings in Terre Haute, Indiana:

- Docker: 19 October, 2020 - 21 October, 2020
- Linux Kernel Debugging and Security (LFD440): 21 September, 2020 - 24 September, 2020
- Microsoft Office Excel 2019: Part 1: 6 October, 2020 - 6 October, 2020
- Enterprise Linux System Administration: 2 November, 2020 - 6 November, 2020
- See our complete public course listing
https://www.hartmannsoftware.com/Training/MSDB/Terre-Haute-Indiana
CC-MAIN-2020-40
en
refinedweb
Android provides standard accessibility services, including TalkBack, and developers can create and distribute their own services. This document explains the basics of building an accessibility service.

Note: Your app should use platform-level accessibility services only for the purpose of helping users with disabilities interact with your app.

The ability for you to build and deploy accessibility services was introduced with Android 1.6 (API Level 4) and received significant improvements with Android 4.0 (API Level 14). The Android Support Library was also updated with the release of Android 4.0 to provide support for these enhanced accessibility features back to Android 1.6. Developers aiming for widely compatible accessibility services are encouraged to use the Support Library and develop for the more advanced accessibility features introduced in Android 4.0.

Manifest declarations and permissions

Applications that provide accessibility services must include specific declarations in their application manifests to be treated as an accessibility service by the Android system. This section explains the required and optional settings for accessibility services.

Accessibility service declaration

In order to be treated as an accessibility service, you must include a service element (rather than the activity element) within the application element in your manifest. In addition, within the service element, you must also include an accessibility service intent filter. For compatibility with Android 4.1 and higher, the manifest must also protect the service by adding the BIND_ACCESSIBILITY_SERVICE permission to ensure that only the system can bind to it. Here's an example:

<application>
  <service android:name=".MyAccessibilityService"
      android:permission="android.permission.BIND_ACCESSIBILITY_SERVICE">
    <intent-filter>
      <action android:name="android.accessibilityservice.AccessibilityService" />
    </intent-filter>
  </service>
</application>

These declarations are required for all accessibility services deployed on Android 1.6 (API Level 4) or higher.

Accessibility service configuration

Accessibility services must also provide a configuration which specifies the types of accessibility events that the service handles and additional information about the service. The configuration of an accessibility service is contained in the AccessibilityServiceInfo class. Your service can build and set a configuration using an instance of this class and setServiceInfo() at runtime. However, not all configuration options are available using this method. Beginning with Android 4.0, you can include a <meta-data> element in your manifest with a reference to a configuration file, which allows you to set the full range of options for your accessibility service, as shown in the following example:

<service android:name=".MyAccessibilityService">
  ...
  <meta-data
      android:name="android.accessibilityservice"
      android:resource="@xml/accessibility_service_config" />
</service>

This meta-data element refers to an XML file that you create in your application's resource directory (<project_dir>/res/xml/accessibility_service_config.xml). The following code shows example contents for the service configuration file:

<accessibility-service xmlns:android="http://schemas.android.com/apk/res/android"
    android:description="@string/accessibility_service_description"
    android:packageNames="com.example.android.myFirstApp"
    android:accessibilityEventTypes="typeAllMask"
    android:accessibilityFlags="flagDefault"
    android:accessibilityFeedbackType="feedbackSpoken"
    android:notificationTimeout="100"
    android:canRetrieveWindowContent="true"
    android:settingsActivity="com.example.android.accessibility.ServiceSettingsActivity" />

For more information about the XML attributes which can be used in the accessibility service configuration file, follow these links to the reference documentation:
- android:description
- android:packageNames
- android:accessibilityEventTypes
- android:accessibilityFlags
- android:accessibilityFeedbackType
- android:notificationTimeout
- android:canRetrieveWindowContent
- android:settingsActivity

For more information about which configuration settings can be dynamically set at runtime, see the AccessibilityServiceInfo reference documentation.
Accessibility service methods

An accessibility service must extend the AccessibilityService class and override the following methods from that class. These methods are presented in the order in which they are called by the Android system, from when the service is started (onServiceConnected()), while it is running (onAccessibilityEvent(), onInterrupt()), to when it is shut down (onUnbind()).

- onServiceConnected() - (optional) The system calls this method when it successfully connects to your accessibility service. Use this method to do any one-time setup steps for your service, including connecting to user feedback system services, such as the audio manager or device vibrator. If you want to set the configuration of your service at runtime or make one-time adjustments, this is a convenient location from which to call setServiceInfo().

- onAccessibilityEvent() - (required) This method is called back by the system when it detects an AccessibilityEvent that matches the event filtering parameters specified by your accessibility service, for example, when the user clicks a button or focuses on a user interface control in an application for which your accessibility service is providing feedback. When this happens, the system calls this method, passing the associated AccessibilityEvent, which the service can then interpret and use to provide feedback to the user. This method may be called many times over the lifecycle of your service.

- onInterrupt() - (required) This method is called when the system wants to interrupt the feedback your service is providing, usually in response to a user action such as moving focus to a different control. This method may be called many times over the lifecycle of your service.

- onUnbind() - (optional) This method is called when the system is about to shut down the accessibility service. Use this method to do any one-time shutdown procedures, including de-allocating user feedback system services, such as the audio manager or device vibrator.

These callback methods provide the basic structure for your accessibility service. It is up to you to decide how to process data provided by the Android system in the form of AccessibilityEvent objects and provide feedback to the user. For more information about getting information from an accessibility event, see Get event details below.

Register for accessibility events

One of the most important functions of the accessibility service configuration parameters is to allow you to specify what types of accessibility events your service can handle. Being able to specify this information enables accessibility services to cooperate with each other, and allows you as a developer the flexibility to handle only specific event types from specific applications. The event filtering can include the following criteria:

- Package Names - Specify the package names of applications whose accessibility events you want your service to handle. If this parameter is omitted, your accessibility service is considered available to service accessibility events for any application. This parameter can be set in the accessibility service configuration file with the android:packageNames attribute as a comma-separated list, or set using the AccessibilityServiceInfo.packageNames member.

- Event Types - Specify the types of accessibility events you want your service to handle.
This parameter can be set in the accessibility service configuration file with the android:accessibilityEventTypes attribute as a list separated by the | character (for example, accessibilityEventTypes="typeViewClicked|typeViewFocused"), or set using the AccessibilityServiceInfo.eventTypes member.

When setting up your accessibility service, carefully consider what events your service is able to handle and only register for those events. Since users can activate more than one accessibility service at a time, your service must not consume events that it cannot handle.

Act for users

Starting with Android 4.0 (API Level 14), accessibility services can act on behalf of users, including changing the input focus and selecting (activating) user interface elements. In Android 4.1 (API Level 16) the range of actions has been expanded to include scrolling lists and interacting with text fields. Accessibility services can also take global actions, such as navigating to the Home screen, pressing the Back button, and opening the notifications screen and recent applications list. Android 4.1 also includes a new type of focus, Accessibility Focus, which makes all visible elements selectable by an accessibility service. These new capabilities make it possible for developers of accessibility services to create alternative navigation modes such as gesture navigation, and give users with disabilities improved control of their Android devices.

Listen for gestures

Accessibility services can listen for specific gestures and respond by taking action on behalf of a user. This feature was added in Android 4.1 (API Level 16) and requires that your accessibility service request activation of the Explore by Touch feature. Your service can request this activation by setting the flags member of the service's AccessibilityServiceInfo instance to FLAG_REQUEST_TOUCH_EXPLORATION_MODE, as shown in the following example:

public class MyAccessibilityService extends AccessibilityService {
    @Override
    public void onServiceConnected() {
        AccessibilityServiceInfo info = getServiceInfo();
        info.flags = AccessibilityServiceInfo.FLAG_REQUEST_TOUCH_EXPLORATION_MODE;
        setServiceInfo(info);
    }
    ...
}

Perform accessibility actions

Accessibility services can take action on behalf of users to make interacting with applications simpler and more productive. The ability of accessibility services to perform actions was added in Android 4.0 (API Level 14) and significantly expanded with Android 4.1 (API Level 16).

In order to take actions on behalf of users, your accessibility service must register to receive events from a few or many applications and request permission to view the content of applications by setting android:canRetrieveWindowContent to true in the service configuration file. When events are received by your service, it can then retrieve the AccessibilityNodeInfo object from the event using getSource(). With the AccessibilityNodeInfo object, your service can then explore the view hierarchy to determine what action to take and then act for the user using performAction().

public class MyAccessibilityService extends AccessibilityService {
    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        // get the source node of the event
        AccessibilityNodeInfo nodeInfo = event.getSource();

        // Use the event and node information to determine
        // what action to take

        // take action on behalf of the user
        nodeInfo.performAction(AccessibilityNodeInfo.ACTION_SCROLL_FORWARD);

        // recycle the nodeInfo object
        nodeInfo.recycle();
    }
    ...
}

The performAction() method allows your service to take action within an application.
If your service needs to perform a global action such as navigating to the Home screen, pressing the Back button, or opening the notifications screen or recent applications list, then use the performGlobalAction() method.

Use focus types

Android 4.1 (API Level 16) introduces a new type of user interface focus called Accessibility Focus. Accessibility services can use this type of focus to select any visible user interface element and act on it. This focus type is different from the more well known Input Focus, which determines what on-screen user interface element receives input when a user types characters, presses Enter on a keyboard, or pushes the center button of a D-pad control.

Accessibility Focus is completely separate and independent from Input Focus. In fact, it is possible for one element in a user interface to have Input Focus while another element has Accessibility Focus. The purpose of Accessibility Focus is to provide accessibility services with a method of interacting with any visible element on a screen, regardless of whether or not the element is input-focusable from a system perspective. You can see accessibility focus in action by testing accessibility gestures. For more information about testing this feature, see Testing gesture navigation.

Note: Accessibility services that use Accessibility Focus are responsible for synchronizing the current Input Focus when an element is capable of this type of focus. Services that don't synchronize Input Focus with Accessibility Focus run the risk of causing problems in applications that expect input focus to be in a specific location when certain actions are taken.

An accessibility service can determine what user interface element has Input Focus or Accessibility Focus using the AccessibilityNodeInfo.findFocus() method. You can also search for elements that can be selected with Input Focus using the focusSearch() method. Finally, your accessibility service can set Accessibility Focus using the performAction(AccessibilityNodeInfo.ACTION_ACCESSIBILITY_FOCUS) method.

Gather information

Accessibility services also have standard methods of gathering and representing key units of user-provided information, such as event details, text, and numbers.

Get event details

The Android system provides information to accessibility services about the user interface interaction through AccessibilityEvent objects. Prior to Android 4.0, the information available in an accessibility event, while providing a significant amount of detail about a user interface control selected by the user, offered limited contextual information. In many cases, this missing context information might be critical to understanding the meaning of the selected control.

An example of an interface where context is critical is a calendar or day planner. If the user selects a 4:00 PM time slot in a Monday to Friday day list and the accessibility service announces "4 PM" but doesn't announce the weekday name, the day of the month, or the month name, the resulting feedback is confusing. In this case, the context of a user interface control is critical to a user who wants to schedule a meeting.

Android 4.0 significantly extends the amount of information that an accessibility service can obtain about a user interface interaction by composing accessibility events based on the view hierarchy. A view hierarchy is the set of user interface components that contain the component (its parents) and the user interface elements that may be contained by that component (its children).
In this way, the Android system can provide much richer detail about accessibility events, allowing accessibility services to provide more useful feedback to users.

An accessibility service gets information about a user interface event through an AccessibilityEvent passed by the system to the service's onAccessibilityEvent() callback method. This object provides details about the event, including the type of object being acted upon, its descriptive text, and other details. Starting in Android 4.0 (and supported in previous releases through the AccessibilityEventCompat object in the Support Library), you can obtain additional information about the event using these calls:

- AccessibilityEvent.getRecordCount() and getRecord(int) - These methods allow you to retrieve the set of AccessibilityRecord objects which contributed to the AccessibilityEvent passed to you by the system. This level of detail provides more context for the event that triggered your accessibility service.

- AccessibilityEvent.getSource() - This method returns an AccessibilityNodeInfo object. This object allows you to request the view layout hierarchy (parents and children) of the component that originated the accessibility event. This feature allows an accessibility service to investigate the full context of an event, including the content and state of any enclosing views or child views.

Important: The ability to investigate the view hierarchy from an AccessibilityEvent potentially exposes private user information to your accessibility service. For this reason, your service must request this level of access through the accessibility service configuration XML file, by including the canRetrieveWindowContent attribute and setting it to true. If you don't include this setting in your service configuration XML file, calls to getSource() fail.

Note: In Android 4.1 (API Level 16) and higher, the getSource() method, as well as AccessibilityNodeInfo.getChild() and getParent(), return only view objects that are considered important for accessibility (views that draw content or respond to user actions). If your service requires all views, it can request them by setting the flags member of the service's AccessibilityServiceInfo instance to FLAG_INCLUDE_NOT_IMPORTANT_VIEWS.
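Putting the callback methods described above together, a bare-bones service skeleton could look like the following (an illustrative sketch with placeholder names, not a complete sample):

import android.accessibilityservice.AccessibilityService;
import android.accessibilityservice.AccessibilityServiceInfo;
import android.content.Intent;
import android.view.accessibility.AccessibilityEvent;

public class MyAccessibilityService extends AccessibilityService {

    @Override
    public void onServiceConnected() {
        // one-time setup; adjust the runtime configuration here if needed
        AccessibilityServiceInfo info = getServiceInfo();
        info.flags |= AccessibilityServiceInfo.FLAG_INCLUDE_NOT_IMPORTANT_VIEWS;
        setServiceInfo(info);
    }

    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        // called for every event matching the registered filtering parameters
    }

    @Override
    public void onInterrupt() {
        // stop any feedback in progress (e.g. ongoing speech or vibration)
    }

    @Override
    public boolean onUnbind(Intent intent) {
        // one-time shutdown, e.g. releasing feedback system services
        return super.onUnbind(intent);
    }
}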
https://developer.android.com/guide/topics/ui/accessibility/service?hl=ru
CC-MAIN-2020-40
en
refinedweb
public static int CurrentLocaleId
{
    get
    {
        //return CurrentCulture.LCID;
        if (System.Web.HttpContext.Current.Session["LCID"] == null)
        {
            return CurrentCulture.LCID;
        }
        else
        {
            return (int)System.Web.HttpContext.Current.Session["LCID"];
        }
    }
}
https://www.experts-exchange.com/questions/23775147/Why-do-I-get-Object-Null-reference-here.html
CC-MAIN-2017-30
en
refinedweb
Here is a listing of tough C questions on "Line Input & Output" along with answers, explanations and/or solutions:

1. What is the size of the array "line" used in the fgets(line, maxline, *fp) function?
a) maxline - 1
b) maxline
c) maxline + 1
d) Size is dynamic
View Answer

2. The following function int fputs(char *line, FILE *fp) returns EOF when:
a) the '\0' character of array line is encountered
b) a '\n' character in array line is encountered
c) a '\t' character in array line is encountered
d) When an error occurs
View Answer

3. Identify the library function X for line input and output:
#include <stdio.h>
int X(char *s, FILE *iop)
{
    int c;
    while (c = *s++)
        putc(c, iop);
    return ferror(iop) ? EOF : 0;
}
a) getc
b) putc
c) fgets
d) fputs
View Answer

4. Which function has a return type of char pointer?
a) getline
b) fputs
c) fgets
d) All of the mentioned
View Answer

5. Which of the following is the right declaration for fgets inside the library?
a) int *fgets(char *line, int maxline, FILE *fp);
b) char *fgets(char *line, int maxline, FILE *fp);
c) char *fgets(char *line, FILE *fp);
d) int *fgets(char *line, FILE *fp);
View Answer

6. fputs returns:
a) EOF if an error occurs
b) Non-negative if no error
c) Both a & b
d) None of the mentioned
View Answer

7. gets and puts operate on
a) stdin and stdout
b) files
c) stderr
d) Nothing
View Answer

8. gets does the following when it reads from stdin
a) Deletes the '\t'
b) puts adds it
c) Deletes the terminating '\n'
d) Nothing
View Answer

Sanfoundry Global Education & Learning Series - C Programming Language. Here's the list of Best Reference Books in C Programming Language. To practice all features of C programming language, here is a complete set of 1000+ Multiple Choice Questions and Answers on C.
http://www.sanfoundry.com/tough-c-questions-line-input-output/
CC-MAIN-2017-30
en
refinedweb
Author: harsh Date: Sat Nov 19 21:22:22 2011 New Revision: 1204077 URL: Log: HADOOP-7297. Remove docs for CN and BN, as they aren't present. (harsh) Modified: hadoop/common/branches/branch-0.20-security/CHANGES.txt hadoop/common/branches/branch-0.20-security/src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml Modified: hadoop/common/branches/branch-0.20-security/CHANGES.txt URL: ============================================================================== --- hadoop/common/branches/branch-0.20-security/CHANGES.txt (original) +++ hadoop/common/branches/branch-0.20-security/CHANGES.txt Sat Nov 19 21:22:22 2011 @@ -37,6 +37,8 @@ Release 0.20.206.0 - unreleased MAPREDUCE-3343. TaskTracker Out of Memory because of distributed cache. (Zhao Yunjiong). + + HADOOP-7297. Remove docs for CN and BN, as they aren't present. (harsh) IMPROVEMENTS Modified: hadoop/common/branches/branch-0.20-security/src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml URL: ============================================================================== --- hadoop/common/branches/branch-0.20-security/src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml (original) +++ hadoop/common/branches/branch-0.20-security/src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml Sat Nov 19 21:22:22 2011 @@ -112,25 +112,9 @@ problems. </li> <li> - Secondary NameNode (deprecated): performs periodic checkpoints of the + Secondary NameNode: performs periodic checkpoints of the namespace and helps keep the size of file containing log of HDFS modifications within certain limits at the NameNode. - Replaced by Checkpoint node. - </li> - <li> - Checkpoint node: performs periodic checkpoints of the namespace and - helps minimize the size of the log stored at the NameNode - containing changes to the HDFS. - Replaces the role previously filled by the Secondary NameNode. - NameNode allows multiple Checkpoint nodes simultaneously, - as long as there are no Backup nodes registered with the system. - </li> - <li> -. </li> </ul> </li> @@ -232,12 +216,6 @@ </section> <section> <title>Secondary NameNode</title> - <note> - The Secondary NameNode has been deprecated. - Instead, consider using the - <a href="hdfs_user_guide.html#Checkpoint+Node">Checkpoint Node</a> or - <a href="hdfs_user_guide.html#Backup+Node">Backup Node</a>. - </note> <p> The NameNode stores modifications to the file system as a log appended to a native file system file, <code>edits</code>. @@ -284,114 +262,6 @@ For command usage, see <a href="commands_manual.html#secondarynamenode">secondarynamenode</a>. </p> - - </section><section> <title> Checkpoint Node </title> - <p>NameNode persists its namespace using two files: <code>fsimage</code>, - which is the latest checkpoint of the namespace and <code>edits</code>, - a journal (log) of changes to the namespace since the checkpoint. - When a NameNode starts up, it merges the <code>fsimage</code> and - <code>edits</code> journal to provide an up-to-date view of the - file system metadata. - The NameNode then overwrites <code>fsimage</code> with the new HDFS state - and begins a new <code>edits</code> journal. - </p> - <p> - The Checkpoint node periodically creates checkpoints of the namespace. - It downloads <code>fsimage</code> and <code>edits</code> - <code>bin/hdfs namenode -checkpoint</code> on the node - specified in the configuration file. 
- </p> - <p>The location of the Checkpoint (or Backup) node and its accompanying - web interface are configured via the <code>dfs.backup.address</code> - and <code>dfs.backup.http.address</code> configuration variables. - </p> - <p> - The start of the checkpoint process on the Checkpoint node is - controlled by two configuration parameters. - </p> - <ul> - <li> - <code>fs.checkpoint.period</code>, set to 1 hour by default, specifies - the maximum delay between two consecutive checkpoints - </li> - <li> - <code>fs.checkpoint.size</code>, set to 64MB by default, defines the - size of the edits log file that forces an urgent checkpoint even if - the maximum checkpoint delay is not reached. - </li> - </ul> - <p> - The Checkpoint node stores the latest checkpoint in a - directory that is structured the same as the NameNode's - directory. This allows the checkpointed image to be always available for - reading by the NameNode if necessary. - See <a href="hdfs_user_guide.html#Import+Checkpoint">Import Checkpoint</a>. - </p> - <p>Multiple checkpoint nodes may be specified in the cluster configuration file.</p> - <p> - For command usage, see - <a href="commands_manual.html#namenode">namenode</a>. - </p> - </section> - - <section> <title> Backup Node </title> - <p> -. - </p> - <p> - The Backup node does not need to download - <code>fsimage</code> and <code>edits</code> <code>fsimage</code> file and reset - <code>edits</code>. - </p> - <p> - As the Backup node maintains a copy of the - namespace in memory, its RAM requirements are the same as the NameNode. - </p> - <p> - The NameNode supports one Backup node at a time. No Checkpoint nodes may be - registered if a Backup node is in use. Using multiple Backup nodes - concurrently will be supported in the future. - </p> - <p> - The Backup node is configured in the same manner as the Checkpoint node. - It is started with <code>bin/hdfs namenode -checkpoint</code>. - </p> - <p>The location of the Backup (or Checkpoint) node and its accompanying - web interface are configured via the <code>dfs.backup.address</code> - and <code>dfs.backup.http.address</code> configuration variables. - </p> - <p> - Use of a Backup node provides the option of running the NameNode with no - persistent storage, delegating all responsibility for persisting the state - of the namespace to the Backup node. - To do this, start the NameNode with the - <code>-importCheckpoint</code> option, along with specifying no persistent - storage directories of type edits <code>dfs.name.edits.dir</code> - for the NameNode configuration. - </p> - <p> - For a complete discussion of the motivation behind the creation of the - Backup node and Checkpoint node, see - <a href="">HADOOP-4539</a>. - For command usage, see - <a href="commands_manual.html#namenode">namenode</a>. - </p> </section> <section> <title> Import Checkpoint </title>
http://mail-archives.apache.org/mod_mbox/hadoop-common-commits/201111.mbox/%3C20111119212223.78213238899C@eris.apache.org%3E
CC-MAIN-2017-30
en
refinedweb
I have an Apache Spark cluster and a RabbitMQ broker, and I want to consume messages and compute some metrics using the pyspark.streaming StreamingContext. This solution is based on the pika asynchronous consumer example (its Consumer class) and the socketTextStream method from Spark Streaming.

Under if __name__ == '__main__': we need to open a socket with the HOST and PORT corresponding to your TCP connection to Spark Streaming. We save the socket's sendall method into a variable and pass it to the Consumer class:

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    s.listen(1)
    conn, addr = s.accept()
    dispatcher = conn.sendall  # assigning sendall to the dispatcher variable
    consumer = Consumer(dispatcher)
    try:
        consumer.run()
    except Exception as e:
        consumer.stop()
        s.close()

Modify the __init__ method in Consumer to accept the dispatcher:

def __init__(self, dispatcher):
    self._connection = None
    self._channel = None
    self._closing = False
    self._consumer_tag = None
    self._url = amqp_url
    # new code
    self._dispatcher = dispatcher

In the on_message method inside Consumer we call self._dispatcher to send the body of the AMQP message:

def on_message(self, unused_channel, basic_deliver, properties, body):
    self._channel.basic_ack(basic_deliver.delivery_tag)
    try:
        # Spark's socketTextStream expects a '\n' at the end of each row
        self._dispatcher(bytes(body.decode("utf-8") + '\n', "utf-8"))
    except Exception as e:
        raise

In Spark, put ssc.socketTextStream(HOST, int(PORT)) with HOST and PORT corresponding to our TCP socket. Spark will manage the connection. Run the consumer first and then the Spark application.
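On the Spark side, a minimal driver might look like the sketch below; the host, port, batch interval, and the message-count metric are illustrative assumptions, not part of the original answer:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="RabbitMQMetrics")
ssc = StreamingContext(sc, batchDuration=5)  # 5-second micro-batches

# Each line is one AMQP message body forwarded by the dispatcher above
lines = ssc.socketTextStream("localhost", 9999)
counts = lines.count()  # example metric: messages per batch
counts.pprint()

ssc.start()
ssc.awaitTermination()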
https://codedump.io/share/IesHfDnumqK1/1/how-to-implement-a-rabbitmq-consumer-using-pyspark-streaming-module
CC-MAIN-2017-30
en
refinedweb
Many algorithms appear over and over again, in program after program. These are called standard algorithms. You are required to know about 5 of these algorithms:
Input Validation (Int 2)
Linear search
Counting occurrences
Finding the maximum value
Finding the minimum value

What is Input Validation?
Input validation checks user input to see if it's within a valid number range. If it's outside the valid range, the algorithm asks for the number again until it falls within the acceptable range.

Input Validation Pseudocode
1. Get and store value
2. Loop WHILE data is outwith range
3. Display error message
4. Prompt user to re-enter value
5. End loop

Input Validation VB6.0 Code
Number = InputBox("Enter number between 1 and 10")
Do While Number < Min Or Number > Max
    MsgBox("Must be num between " & Min & " and " & Max)
    Number = InputBox("Enter number between " & Min & " and " & Max)
Loop

What is Linear Search?
Linear search is the simplest search method to implement and understand. Starting with an array holding 8 numbers with a pointer indicating the first item, the user inputs a search key. Scanning then takes place from left to right until the search key is found, if it exists in the list.

Linear Search (The search item is 76)
Each item is checked to see if it is 76, until 76 is found or the last item is checked.

Linear Search Pseudocode
1. Set found to false
2. Get search value
3. Start at first element on the list
4. Do while (not end of list) AND (found is false)
5. If current element = search value Then
6. Set found = true
7. Display found message
8. Else
9. Move to next element in the list
10. End If
11. Loop
12. If found = false Then
13. Display not found message
14. End If

Linear Search VB6.0 Code
IsFound = False
SearchValue = InputBox("Please enter the value you are looking for")
Position = 0
Do While (Position <> UBound(List())) And (IsFound = False)
    If List(Position) = SearchValue Then
        IsFound = True
        MsgBox("The item is at position " & Position & " in the list")
    Else
        Position = Position + 1
    End If
Loop
If IsFound = False Then
    MsgBox("The item is not in the list")
End If

What is Counting Occurrences?
Programs often have to count occurrences. Examples include counting the number of:
students who achieved particular marks in an exam
rainfall measurements greater than a particular level
words equal to a given search value in a text file.
The basic mechanism is simple:
1. a counter is set to 0
2. a list is searched for the occurrence of the search value
3. every time the search value occurs, the counter is incremented

Counting Occurrences Algorithm
1. Set counter = 0
2. Get search value
3. Set pointer to start of the list
4. Do
5. If search item = list(position) Then
6. Add 1 to counter
7. End If
8. Move to next position
9. Until end of list
10. Display number of occurrences

Counting Occurrences VB6.0 Code
Counter = 0
Occurrence = InputBox("Please enter value to count")
Position = 0
Do
    If List(Position) = Occurrence Then
        Counter = Counter + 1
    End If
    Position = Position + 1
Loop Until Position = UBound(List())

Maximum And Minimum Algorithms
Computers are often used to find maximum and minimum values in a list.
For example, a spreadsheet containing running times for videos might make use of a maximum algorithm to identify the video with the longest running time, or a minimum algorithm to identify the shortest running time. To find a maximum, we set up a variable which will hold the value of the largest item that has been found so far, usually the first element. If an element in the array exceeds this working maximum, we give the working maximum that value.

Maximum Pseudocode
1. Set maximum value to first item in this list
2. Set current position to 1
3. Do
4. If list(position) > maximum Then
5. Set maximum equal to list(position)
6. End If
7. Move to next position
8. Until end of list
9. Display maximum value

Finding the Maximum VB6.0 Code
Maximum = List(0)
Position = 1
Do
    If List(Position) > Maximum Then
        Maximum = List(Position)
    End If
    Position = Position + 1
Loop Until Position = UBound(List())
MsgBox("The maximum value is " & Maximum)

Minimum Pseudocode
1. Set minimum value to first item in this list
2. Set current position to 1
3. Do
4. If list(position) < minimum Then
5. Set minimum equal to list(position)
6. End If
7. Move to next position
8. Until end of list
9. Display minimum value

Finding the Minimum VB6.0 Code
Minimum = List(0)
Position = 1
Do
    If List(Position) < Minimum Then
        Minimum = List(Position)
    End If
    Position = Position + 1
Loop Until Position = UBound(List())
MsgBox("The minimum value is " & Minimum)

Standard Algorithm Exam Questions
An international athletics competition between eight countries has a number of events. The winning times are stored in a list in order of lane number like the one on the right. The stadium needs a program to help process the results.
Q1. The program must find the fastest time for a race. Use pseudocode to design an algorithm to find the fastest time. (4 marks)
Q2. It is suggested that the algorithm should find the lane number of the fastest time instead of the fastest time. Explain how this could be achieved. (1 mark)

Lane / Time (secs)
1 / 40.23
2 / 41.05
3 / 42.88
4 / 39.89
5 / 40.55
6 / 40.01
7 / 39.87

Exam Marking Scheme
Set fastest to first time in list
For rest of array items
    If array(current) < fastest then
        Set fastest to array(current)
    End if
End loop
In summary, 1 mark for each of the following:
Setting initial value
Loop (with end) for traversal of array
Comparison of current element with maximum value (with end if)
Assignment of new maximum value

Standard Algorithm Exam Questions
NoTow is a company that runs a city centre car park. The company requires a piece of software that will calculate the number of cars on a particular day that spent more than three hours in the car park. The number of whole minutes each car is parked is stored in a list as shown on the right.
Q3. Use pseudocode to design an algorithm to carry out this calculation. (4 marks)

Minutes parked: 124, 210, 105, 193, 157

Exam Marking Scheme
Set over3 = 0
For each car that day
    If duration > 180 then
        Add one to over3
    End if
End loop
1 mark for initialising
1 mark loop with termination
1 mark for if..endif with correct condition
1 mark for keeping running total
Note: End of if/loop may be implicit in a clearly indented algorithm. The value is in minutes, so the condition is > 180
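For readers working outside VB6, here is a rough Python transcription of two of the algorithms above; the function and variable names are mine, not from the slides:

def count_occurrences(items, target):
    counter = 0
    for item in items:        # traverse the whole list
        if item == target:
            counter += 1      # keep a running total
    return counter

def minimum(items):
    smallest = items[0]       # set initial value to the first item
    for item in items[1:]:
        if item < smallest:   # compare current element with minimum so far
            smallest = item
    return smallest

times = [40.23, 41.05, 42.88, 39.89, 40.55, 40.01, 39.87]
print(minimum(times))                    # 39.87 - the fastest time
print(count_occurrences(times, 40.23))   # 1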
https://slideplayer.com/slide/4776144/
CC-MAIN-2020-10
en
refinedweb
Coding the Driver

In addition to coding the individual states as classes, you also need to create a driver for the class. Like the states, the driver is derived from a class in the framework. Here's a sample driver; first, the class definition:

class Calculator : public FSMDriver {
public:
    queue<CalcStackItem *> PostFixQueue;
    stack<char> OperatorStack;
    FSMState<Calculator> *start;
    Calculator();
    void Run(char *chars, FSMState<Calculator> *StartState);
    double Perform();
};

This class includes special members for the particular state machine I'm building, a calculator. It also includes an overloaded Run method that actually runs the state machine (although all it really does is call the Run method in the base class and catch exceptions for invalid states). Finally, it includes a Perform method that does the final calculation. Of course, I'm only showing the declarations for the functions. Listing 1 shows the entire code for the Calculator state machine; after that, I'll show the base classes that make this all work.

Listing 1 The custom state and driver classes.

#include <string>
#include <iostream>
#include <cstdlib>  // for strtod
#include "fsm.hpp"
#include <stack>
#include <queue>
#include <exception>
#include <math.h>

using namespace std;

struct CalcStackItem {
    enum {number, op} itemtype;
    double numbervalue;
    char opchar;
    CalcStackItem(double anumbervalue) {
        itemtype = number;
        numbervalue = anumbervalue;
    }
    CalcStackItem(char aopchar) {
        itemtype = op;
        opchar = aopchar;
    }
};

// Your own driver class that contains
// data available to the state objects.
class Calculator : public FSMDriver {
public:
    queue<CalcStackItem *> PostFixQueue;
    stack<char> OperatorStack;
    FSMState<Calculator> *start;
    Calculator();
    void Run(char *chars, FSMState<Calculator> *StartState);
    double Perform();
};

// State: Start
class StartState : public FSMState<Calculator> {
public:
    StartState(Calculator *adriver) :
        FSMState<Calculator>(adriver, "StartState") { }
    void OnEntry() {
        cout << driver->getCharString() << endl;
    }
    FSMStateBase *HandleEvent() {
        if (isIn("01234567890.")) {
            return driver->getState("NumberEntryState");
        }
        // fall back to the error state for any other character
        return driver->getState("ErrorState");
    }
};

// State: NumberEntry
class NumberEntryState : public FSMState<Calculator> {
protected:
    string accum;
public:
    NumberEntryState(Calculator *adriver) :
        FSMState<Calculator>(adriver, "NumberEntryState") { }
    void OnEntry() {
        Append();
    }
    FSMStateBase *HandleEvent() {
        if (isIn("01234567890.Ee")) {
            Append();
            return this;
        }
        else if (isIn("+-*/")) {
            ParseAndPush();
            return driver->getState("OperatorState");
        }
        else if (isIn("=")) {
            ParseAndPush();
            return driver->getState("EqualsState");
        }
        else {
            return driver->getState("ErrorState");
        }
    }
    void Append() {
        accum += Current;
    }
    void ParseAndPush() {
        double x = strtod(accum.c_str(), NULL);
        accum = "";
        driver->PostFixQueue.push(new CalcStackItem(x));
    }
};

// State: Operator
class OperatorState : public FSMState<Calculator> {
protected:
    string accum;
    map<char,int> OpOrder;
public:
    OperatorState(Calculator *adriver) :
        FSMState<Calculator>(adriver, "OperatorState") {
        OpOrder['+'] = 0;
        OpOrder['-'] = 0;
        OpOrder['*'] = 1;
        OpOrder['/'] = 1;
    }
    void OnEntry() {
        if (driver->OperatorStack.size() == 0) {
            driver->OperatorStack.push(Current);
        }
        else {
            while (driver->OperatorStack.size() > 0 &&
                   (OpOrder[driver->OperatorStack.top()]) >= (OpOrder[Current])) {
                driver->PostFixQueue.push(
                    new CalcStackItem(driver->OperatorStack.top()));
                driver->OperatorStack.pop();
            }
            driver->OperatorStack.push(Current);
        }
    }
    FSMStateBase *HandleEvent() {
        if (isIn("01234567890.")) {
            return driver->getState("NumberEntryState");
        }
        else {
            return driver->getState("ErrorState");
        }
    }
};

// State: Equals
class EqualsState : public FSMState<Calculator> {
public:
    EqualsState(Calculator *adriver) :
        FSMState<Calculator>(adriver, "EqualsState") { }
    void OnEntry() {
        // Finish up grabbing the operators
        while (driver->OperatorStack.size() > 0) {
            char op = driver->OperatorStack.top();
            driver->OperatorStack.pop();
            driver->PostFixQueue.push(new CalcStackItem(op));
        }
        cout << driver->Perform() << endl;
        driver->OperatorStack.empty();
        driver->PostFixQueue.empty();
    }
    FSMStateBase *HandleEvent() {
        if (isIn("01234567890.")) {
            return driver->getState("NumberEntryState");
        }
        else {
            return driver->getState("ErrorState");
        }
    }
};

// State: Error
class ErrorState : public FSMState<Calculator> {
public:
    ErrorState(Calculator *adriver) :
        FSMState<Calculator>(adriver, "ErrorState") { }
    void OnEntry() {
        cout << " Syntax Error" << endl;
    }
    FSMStateBase *HandleEvent() {
        return NULL;
    }
};

Calculator::Calculator() : FSMDriver() {
    start = new StartState(this);
    new NumberEntryState(this);
    new OperatorState(this);
    new EqualsState(this);
    new ErrorState(this);
}

void Calculator::Run(char *chars, FSMState<Calculator> *StartState) {
    try {
        FSMDriver::Run(chars, StartState);
    }
    catch (StateNameException e) {
        cout << "Invalid state name: " << e.getMessage() << endl;
    }
}

double Calculator::Perform() {
    stack<double> holder;
    CalcStackItem *item;
    double result;
    while (PostFixQueue.size() > 0) {
        item = PostFixQueue.front();
        PostFixQueue.pop();
        if (item->itemtype == CalcStackItem::number) {
            holder.push(item->numbervalue);
        }
        else {
            double second = holder.top();
            holder.pop();
            double first = holder.top();
            holder.pop();
            switch (item->opchar) {
                case '+': result = first + second; break;
                case '-': result = first - second; break;
                case '*': result = first * second; break;
                case '/': result = first / second; break;
            }
            holder.push(result);
        }
    }
    result = holder.top();
    return result;
}

int main(int argc, char *argv[]) {
    Calculator calc;
    calc.Run("10+5-3*2=2*3*4=", calc.start);
    cout << "===============" << endl;
    calc.Run("2*2+3*4=", calc.start);
    cout << "===============" << endl;
    calc.Run("1+2q", calc.start);
    cout << "===============" << endl;
    calc.Run("6.02e23/153.25=", calc.start);
    cout << "===============" << endl;
    return 0;
}

The classes in this file include those that I already mentioned, plus a few other state classes that I described in the earlier table. Now for the framework. This code is actually not very long, but is divided between a header file and a code file. Listing 2 shows the header file.

Listing 2 The state machine header file, fsm.hpp.
#include <string>
#include <cstring>  // for memchr and strlen
#include <map>

class StateNameException {
protected:
    std::string msg;
public:
    explicit StateNameException(const std::string& amsg) : msg(amsg) {};
    std::string getMessage() const { return msg; }
};

class FSMStateBase;

class FSMDriver {
private:
    std::map<char *, FSMStateBase *> States;
protected:
    char Current;
    char *CharString;
    FSMStateBase *CurrentState;
public:
    ~FSMDriver();
    void addState(FSMStateBase *state, char *name) {
        States[name] = state;
    }
    char *getCharString() {
        return CharString;
    }
    virtual void Run(char *chars, FSMStateBase *StartState);
    FSMStateBase *getState(char *name) {
        FSMStateBase *res = States[name];
        if (res != NULL) {
            return res;
        }
        else {
            throw StateNameException(name);
        }
    }
};

class FSMStateBase {
protected:
    friend class FSMDriver;
    FSMStateBase() {}
    virtual void OnEntry() = 0;
    virtual FSMStateBase *HandleEvent() = 0;
    char Current;
};

template <typename T>
class FSMState : public FSMStateBase {
protected:
    T *driver;
    char *name;
    friend class FSMDriver;
    FSMState(T *adriver, char *aname) : name(aname), driver(adriver) {
        driver->addState(this, name);
    }
    bool isIn(char* str) {
        return (memchr(str, Current, strlen(str)) >= str);
    }
public:
    virtual void OnEntry() {}
    virtual FSMStateBase *HandleEvent() { return NULL; }
};

Listing 3 shows the code file, which contains a couple of the methods for the FSMDriver class:

Listing 3 The code for the state machine, fsm.cpp.

#include "fsm.hpp"

FSMDriver::~FSMDriver() {
    // Collect the garbage
    std::map<char *, FSMStateBase *>::iterator iter = States.begin();
    while (iter != States.end()) {
        FSMStateBase *state = iter->second;
        delete state;
        iter++;
    }
}

void FSMDriver::Run(char *chars, FSMStateBase *StartState) {
    int i;
    int len = strlen(chars);
    CharString = chars;
    FSMStateBase *NextState;
    CurrentState = StartState;
    StartState->OnEntry();
    for (i = 0; i < len; i++) {
        Current = chars[i];
        CurrentState->Current = Current;
        NextState = CurrentState->HandleEvent();
        if (NextState == NULL) {
            return;
        }
        if (NextState != CurrentState) {
            CurrentState = NextState;
            CurrentState->Current = Current;
            CurrentState->OnEntry();
        }
    }
}

The header file contains four classes:

- StateNameException is a small exception class. When the driver encounters a name of a state that doesn't exist, it throws an exception of this class.
- FSMDriver is the main driver class. To use it, your best bet is to derive a new class from it, as I did in the preceding example; then call addState, passing instances of your state classes. Finally, call its Run method, passing the string of events along with the starting class.
- FSMStateBase is the base class for all states. It actually serves as a "technicality" because I wanted to create my state class as a template, wherein the template parameter is the driver class. But the driver class contains a map of the state classes, which makes a bit of a circular chicken-and-egg problem. To fix the problem, I started with a non-template base class for the states, FSMStateBase. The driver uses this base class for its members, and the template class is derived from this class, thus solving the problem.
- FSMState is the base of all the states; as I mentioned, it's a template that's derived from the FSMStateBase class. This class includes a pointer to the driver class.

Before moving on, I want to explain just a tad more about the FSMState class being a template class. The reason I made it a template is so that when you derive your classes from it, you can specify your own driver class, thus alleviating the need to do a dangerous downcast.
What do I mean by that? Suppose I didn't use the templates, and instead had two classes like this:

class FSMDriver2 {
public:
    int value;
};

class FSMState2 {
public:
    FSMDriver2 *driver;
};

Now suppose that you derive a class from each of these—your own driver class and a state class:

class MyDriver : public FSMDriver2 {
public:
    int somedata;
};

class MyState1 : public FSMState2 {
    void foo() {
        driver->somedata = 10;
        driver->value = 20;
    }
};

Your driver class contains some data specific to your own state machine. That's where the problem comes in. Look at how I'm trying to access the data in the state class:

driver->somedata = 10;

That won't work. You'll get a compile error because, as far as the compiler is concerned, the driver member points to an instance of the base FSMDriver2 class, even though most likely you created an instance of your derived class and stored its address in the driver variable, like so:

void test() {
    MyDriver dr;
    MyState1 st;
    st.driver = &dr;
}

Really, the driver instance does have a somedata member. But the compiler doesn't know that. To fix this, you could do a downcast, like so:

class MyState1 : public FSMState2 {
    void foo() {
        ((MyDriver *)driver)->somedata = 10;
    }
};

But that's considered a dangerous practice because the compiler will create code that forces a cast, even if driver ends up pointing to some class other than MyDriver (that is, to some other class derived from FSMDriver2). Of course, in this code, we know what driver points to, but nevertheless, doing such a cast is considered poor coding practice. You could also use the static_cast operator in C++, but the effect will be pretty much the same. In addition to being bad coding style, such casts also make the code more cumbersome to use; frankly, it would be a pain to always have to cast the driver member each time you use it.

Instead, I decided on an alternative approach—a template. If I use a template, I could simply specify the type when I create the class, and then from there easily access the members. Thus, the short example would turn into this:

class FSMDriver2 {
public:
    int value;
};

template<typename T>
class FSMState2 {
public:
    T *driver;
};

class MyDriver : public FSMDriver2 {
public:
    int somedata;
};

class MyState1 : public FSMState2<MyDriver> {
    void foo() {
        driver->somedata = 10;
        driver->value = 20;
    }
};

void test() {
    MyDriver dr;
    MyState1 st;
    st.driver = &dr;
}

This code compiles and runs. And the cool thing is that I can access members of both my own driver class plus its base class, FSMDriver2, without having to do any casts, as you can see in the foo member function of the state class. And further, since my state class is derived from a template class but is not itself a template class, I don't need to use templates when I put the class to use, as the test function shows. This template approach is exactly what I did in the framework.

Now I want to give more details about the other template issue, where I created a base class. Continuing with these short sample classes, suppose I want to put a map inside the FSMDriver2 class containing instances of the state classes, something like this:

class FSMDriver2 {
public:
    int value;
    std::map<char *, FSMState2<T> *> States;
};

Of course, this won't compile, because T doesn't mean anything inside the FSMDriver2 class.
You could fiddle with the class and try to make it work, like this:

template<typename T>
class FSMDriver2 {
public:
    int value;
    std::map<char *, FSMState2<T> *> States;
};

You would just use the same T here as you do in your state classes. But let's stop there. The reason this won't work is that the T parameter in this template is ultimately going to be a class derived from FSMDriver2, the very class that's using T. What a mess. If you search the Web, you might find various workarounds for this problem, but I like the following solution. Instead of trying to coerce the compiler to accept this, remove the templates at this level by making a non-template base class, and use that class inside the driver class:

class FSMStateBase2 {
    int somethingorother;
};

class FSMDriver2 {
public:
    int value;
    std::map<char *, FSMStateBase2 *> States;
};

template<typename T>
class FSMState2 : public FSMStateBase2 {
public:
    T *driver;
};

Now the driver contains a map of states, while the state is a template with the driver as an argument. These three classes compile fine, and this is exactly the approach I used in the real state machine code.
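As an aside, the need for this template gymnastics comes from C++'s static typing. In a duck-typed language the mutual driver/state reference involves no casts or templates at all, because attribute access is resolved at runtime. A hypothetical sketch in Python, with names that are illustrative and not from the article:

class Driver:
    def __init__(self):
        self.value = 0
        self.states = {}              # name -> state instance

    def add_state(self, name, state):
        self.states[name] = state

class State:
    def __init__(self, driver):
        self.driver = driver          # plain reference; no downcast needed

class MyDriver(Driver):
    def __init__(self):
        super().__init__()
        self.somedata = 0

class MyState(State):
    def foo(self):
        self.driver.somedata = 10     # resolved at runtime
        self.driver.value = 20

driver = MyDriver()
state = MyState(driver)
state.foo()
print(driver.somedata, driver.value)  # 10 20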
http://www.informit.com/articles/article.aspx?p=463940&seqNum=5
CC-MAIN-2020-10
en
refinedweb
An outlier removal assignment

Project description
Library for removing outliers from a pandas dataframe. PROJECT 2, UCS633 - Data Analysis and Visualization. Takes two inputs: the filename of the input csv and the intended filename of the output csv.

Installation
pip install navnish

For use via command line
navnish in.csv out.csv

For use in a .py script
from anshu_viv import remove_outliers
remove_outliers('input.csv', 'output.csv')
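The project page does not document which outlier-detection method the package applies. For orientation only, a common interquartile-range approach that such a helper might plausibly implement; the column handling and the 1.5 x IQR threshold are assumptions, not taken from the package:

import pandas as pd

def remove_outliers_iqr(in_csv, out_csv):
    df = pd.read_csv(in_csv)
    for col in df.select_dtypes(include="number").columns:
        q1, q3 = df[col].quantile([0.25, 0.75])
        iqr = q3 - q1
        # keep only rows within 1.5 * IQR of the quartiles
        df = df[(df[col] >= q1 - 1.5 * iqr) & (df[col] <= q3 + 1.5 * iqr)]
    df.to_csv(out_csv, index=False)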
https://pypi.org/project/navnish/
CC-MAIN-2020-10
en
refinedweb
Python interface to the Google WebRTC Voice Activity Detector (VAD) [released with binary wheels!]

Project description
py-webrtcvad-wheels
This is a python interface to the WebRTC Voice Activity Detector (VAD). It is compatible with Python 2 and Python 3. It is forked from wiseman/py-webrtcvad to provide releases with binary wheels.

A VAD classifies a piece of audio data as being voiced or unvoiced. It can be useful for telephony and speech recognition. The VAD that Google developed for the WebRTC project is reportedly one of the best available, being fast, modern and free.

How to use it
Install the webrtcvad module:
pip install webrtcvad

Create a Vad object:
import webrtcvad
vad = webrtcvad.Vad()

Optionally, set its aggressiveness mode, which is an integer between 0 and 3. 0 is the least aggressive about filtering out non-speech, 3 is the most aggressive. (You can also set the mode when you create the VAD, e.g. vad = webrtcvad.Vad(3)):
vad.set_mode(1)

Give it a short segment ("frame") of audio. The WebRTC VAD only accepts 16-bit mono PCM audio, sampled at 8000, 16000, 32000 or 48000 Hz. A frame must be either 10, 20, or 30 ms in duration:

# Run the VAD on 10 ms of silence. The result should be False.
sample_rate = 16000
frame_duration = 10  # ms
frame = b'\x00\x00' * int(sample_rate * frame_duration / 1000)
print 'Contains speech: %s' % (vad.is_speech(frame, sample_rate))

See example.py for a more detailed example that will process a .wav file, find the voiced segments, and write each one as a separate .wav.

How to run unit tests
To run unit tests:
pip install -e ".[dev]"
python setup.py test

History
2.0.10
Fixed memory leak. Thank you, bond005!
2.0.9
Improved example code. Added WebRTC license.
2.0.8
Fixed Windows compilation errors. Thank you, xiongyihui!
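Extending the snippet above, a small sketch of walking a longer PCM buffer frame by frame; the one-second silent buffer and the 30 ms frame size are arbitrary stand-ins for real audio:

import webrtcvad

vad = webrtcvad.Vad(2)
sample_rate = 16000
frame_ms = 30
frame_bytes = int(sample_rate * frame_ms / 1000) * 2  # 16-bit mono: 2 bytes per sample

pcm = b'\x00\x00' * sample_rate  # one second of silence as stand-in data
for i in range(0, len(pcm) - frame_bytes + 1, frame_bytes):
    frame = pcm[i:i + frame_bytes]
    print(vad.is_speech(frame, sample_rate))  # False for every silent frame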
https://pypi.org/project/webrtcvad-wheels/
CC-MAIN-2020-10
en
refinedweb
Wait for a notification that a monitor's state has changed.

Syntax
#include <prcmon.h>

PRStatus PR_CWait(void *address, PRIntervalTime timeout);

Parameters
The function has the following parameters:
address - The address of the protected object--the same address previously passed to PR_CEnterMonitor.
timeout - The amount of time (in PRIntervalTime units) that the thread is willing to wait for an explicit notification before being rescheduled. If you specify PR_INTERVAL_NO_TIMEOUT, the function returns if and only if the object is notified.

Returns
The function returns one of the following values:
PR_SUCCESS indicates either that the monitored object has been notified or that the interval specified in the timeout parameter has been exceeded.
PR_FAILURE indicates either that the monitor could not be located in the cache or that the monitor was located and the calling thread was not the thread that held the monitor's mutex.

Description
Using the value specified in the address parameter to find a monitor in the monitor cache, PR_CWait waits for a notification that the monitor's state has changed. While the thread is waiting, it exits the monitor (just as if it had called PR_CExitMonitor as many times as it had called PR_CEnterMonitor). When the wait has finished, the thread regains control of the monitor's lock with the same entry count as before the wait began.

The thread waiting on the monitor resumes execution when the monitor is notified (assuming the thread is the next in line to receive the notify) or when the interval specified in the timeout parameter has been exceeded. When the thread resumes execution, it is the caller's responsibility to test the state of the monitored data to determine the appropriate action.
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_CWait
CC-MAIN-2020-10
en
refinedweb
Represents the base class for classes which provide the image options for a chart or its elements.

Namespace: DevExpress.XtraCharts
Assembly: DevExpress.XtraCharts.v19.2.dll

public class ChartImage : ChartElement, ICustomTypeDescriptor
Public Class ChartImage Inherits ChartElement Implements ICustomTypeDescriptor

The ChartImage class provides the following properties, which are common to all the derived classes: ChartImage.Image and ChartImage.ImageUrl. These properties allow you to load an image file (for a ChartControl instance), or specify a URL to it (for a WebChartControl instance).
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraCharts.ChartImage
CC-MAIN-2020-10
en
refinedweb
tensorflow::ops::TensorArrayGrad

#include <data_flow_ops.h>

Creates a TensorArray for storing the gradients of values in the given handle.

Summary
https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/tensor-array-grad.html?hl=it
CC-MAIN-2020-10
en
refinedweb
An object is created by first building a struct that contains the data fields required by the class, and then calling the class function to indicate that an object is to be created from the struct. Creating a child of an existing object is done by creating an object of the parent class and providing that object as the third argument of the class function.

This is easily demonstrated by example. Suppose the programmer needs an FIR filter, i.e., a filter with a numerator polynomial but a unity denominator polynomial. In traditional octave programming, this would be performed as follows.

octave:1> x = [some data vector];
octave:2> n = [some coefficient vector];
octave:3> y = filter (n, 1, x);

The equivalent class could be implemented in a class directory @FIRfilter that is on the octave path. The constructor is a file FIRfilter.m in the class directory.

## -*- texinfo -*-
## @deftypefn {Function File} {} FIRfilter ()
## @deftypefnx {Function File} {} FIRfilter (@var{p})
## Create a FIR filter with polynomial @var{p} as coefficient vector.
## @end deftypefn
function f = FIRfilter (p)
  f.polynomial = [];
  if (nargin == 0)
    p = @polynomial ([1]);
  elseif (nargin == 1)
    if (!isa (p, "polynomial"))
      error ("FIRfilter: expecting polynomial as input argument");
    endif
  else
    print_usage ();
  endif
  f = class (f, "FIRfilter", p);
endfunction

As before, the leading comments provide command-line documentation for the class constructor. This constructor is very similar to the polynomial class constructor, except that we pass a polynomial object as the third argument to the class function, telling octave that the FIRfilter class will be derived from the polynomial class. Our FIR filter does not have any data fields, but we must provide a struct to the class function. The class function will add an element named polynomial to the object struct, so we simply add a dummy element named polynomial as the first line of the constructor. This dummy element will be overwritten by the class function.

Note further that all our examples provide for the case in which no arguments are supplied. This is important since octave will call the constructor with no arguments when loading objects from save files to determine the inheritance structure.

A class may be a child of more than one class (see the documentation for the class function), and inheritance may be nested. There is no limitation to the number of parents or the level of nesting other than memory or other physical issues.

As before, we need a display method. A simple example might be

function display (f)
  display (f.polynomial);
endfunction

Note that we have used the polynomial field of the struct to display the filter coefficients. Once we have the class constructor and display method, we may create an object by calling the class constructor. We may also check the class type and examine the underlying structure.

octave:1> f = FIRfilter (polynomial ([1 1 1]/3))
f.polynomial = 0.333333 + 0.333333 * X + 0.333333 * X ^ 2
octave:2> class (f)
ans = FIRfilter
octave:3> isa (f,"FIRfilter")
ans = 1
octave:4> isa (f,"polynomial")
ans = 1
octave:5> struct (f)
ans =
{
  polynomial = 0.333333 + 0.333333 * X + 0.333333 * X ^ 2
}

We only need to define a method to actually process data with our filter and our class is usable. It is also useful to provide a means of changing the data stored in the class. Since the fields in the underlying struct are private by default, we could provide a mechanism to access the fields. The subsref method may be used for both.
function out = subsref (f, x)
  switch (x.type)
    case "()"
      n = f.polynomial;
      out = filter (n.poly, 1, x.subs{1});
    case "."
      fld = x.subs;
      if (strcmp (fld, "polynomial"))
        out = f.polynomial;
      else
        error ("@FIRfilter/subsref: invalid property \"%s\"", fld);
      endif
    otherwise
      error ("@FIRfilter/subsref: invalid index type");
  endswitch
endfunction

so that the filter coefficients of the object created earlier can be read back:

octave:6> f.polynomial
ans = 0.333333 + 0.333333 * X + 0.333333 * X ^ 2

In order to change the contents of the object, we need to define a subsasgn method. For example, we may make the polynomial field publicly writable.

function out = subsasgn (f, index, val)
  switch (index.type)
    case "."
      fld = index.subs;
      if (strcmp (fld, "polynomial"))
        out = f;
        out.polynomial = val;
      else
        error ("@FIRfilter/subsasgn: invalid property \"%s\"", fld);
      endif
    otherwise
      error ("FIRfilter/subsasgn: Invalid index type");
  endswitch
endfunction

So that

octave:6> f = FIRfilter ();
octave:7> f.polynomial = polynomial ([1 2 3]);
f.polynomial = 1 + 2 * X + 3 * X ^ 2

Defining the FIRfilter class as a child of the polynomial class implies that an FIRfilter object may be used any place that a polynomial may be used. This is not a normal use of a filter, so aggregation may be a more sensible design approach. In this case, the polynomial is simply a field in the class structure. A class constructor for this case might be

## -*- texinfo -*-
## @deftypefn {Function File} {} FIRfilter ()
## @deftypefnx {Function File} {} FIRfilter (@var{p})
## Create a FIR filter with polynomial @var{p} as coefficient vector.
## @end deftypefn
function f = FIRfilter (p)
  if (nargin == 0)
    f.polynomial = @polynomial ([1]);
  elseif (nargin == 1)
    if (isa (p, "polynomial"))
      f.polynomial = p;
    else
      error ("FIRfilter: expecting polynomial as input argument");
    endif
  else
    print_usage ();
  endif
  f = class (f, "FIRfilter");
endfunction

For our example, the remaining class methods remain unchanged.
https://octave.org/doc/v4.0.1/Inheritance-and-Aggregation.html
CC-MAIN-2020-10
en
refinedweb
In this introduction to Inertia.js, we'll look at Inertia's viability in the near future, the advantages it has, and how to use it in a Laravel and Vue project.

What is Inertia.js?
Inertia is a library that combines the best of both server-side rendering (SSR) and client-side rendering (CSR) by allowing developers to build SPAs using server-side routing and controllers.

Building web applications can be a very daunting process. You have to think about whether it will be a traditional server-side rendered app (SSR) or a single page application (SPA) before proceeding to pick from the many frameworks and libraries. While both server-side and client-side rendering have their pros and cons, Inertia combines the best of both worlds.

Some might be asking: is this another JavaScript framework? The documentation has this to say:

Inertia isn't a framework, nor is it a replacement to your existing server-side or client-side frameworks. Rather, it's designed to work with them. Think of Inertia as glue that connects the two.

The problem Inertia.js solves
Inertia solves many problems developers face when building modern applications. For example, it provides the remember, preserveState, and preserveScroll properties to cache local component states, and it gives full access to run specific queries on a database to get the data needed for a page while using your server-side ORM as a data source.

In traditional SPAs, AJAX calls are made on every page visit to fetch data. In Inertia, an AJAX call is made to boot up the app; it then maintains a persistent Vue.js instance, and every subsequent page visit is made via XHR with a special X-Inertia header set to true. This triggers the server sending an Inertia response as JSON rather than making a full-page visit. It also creates a fail-safe component that wraps around a standard anchor link; it intercepts click events and prevents full page reloads from occurring.

When building API-powered apps, we have to add CORS support to our app to be able to access resources on other origins. With Inertia you don't have to worry about setting up CORS, since your data is provided via your controllers and housed on the same domain as your JavaScript components.

You can set up authorization on the server side and perform authorization checks by passing tokens as props to your page components; this helps reduce the risk of exposing important information, because handling authorization on the client can put one at the risk of an XSS attack (cross-site scripting).

Inertia is both server-side and client-side framework agnostic. You can use Inertia with any server-side framework as well as any client-side framework that supports dynamic components. Inertia adapters are services (packages) that help make Inertia work well with specific frameworks. Official adapter support is currently limited to Rails and Laravel on the backend, and React, Vue.js and Svelte on the frontend. There are unofficial adapters for some other frameworks such as Symfony, Django, CakePHP, and Adonis.

Is there a future for Inertia?
The web is forever evolving, and we've seen a transition from traditional server-side built monolith apps to API-powered apps. With this current trend, is there a future for Inertia?
Of course, the answer to the question depends on the use case and preferences. Inertia is built for people who want to build monolith applications - they generally prefer the tight coupling between their controllers and their views, but also want to build their apps using modern client-side frameworks. A majority of developers still fall into this category, but with the rise of and industry support for API-powered apps, we might see its usage dwindle.

Of course, there are times when using Inertia might not be the best fit: situations such as when you need multi-client support, customer-facing/marketing pages, or SEO-driven websites. Using Inertia for these is probably not a good idea. But it is perfectly useful for building web apps that power dashboards and the like.

Is server-side rendering possible with Inertia?
Inertia does not currently support server-side rendering, but there are tools to pre-render Inertia websites; they generate and cache static HTML versions of specific routes of your websites, and then serve that content.

Get started with using Inertia.js in your project
This installation process makes use of Laravel for the server side and Vue.js for the client side; the following is required to follow along with this section:

Create a new Laravel project:
laravel new inertia-example

Or create with composer:
composer create-project --prefer-dist laravel/laravel inertia-example

cd into the project:
$ cd inertia-example

Install Inertia's server-side adapter using composer:
composer require inertiajs/inertia-laravel

Rename the welcome.blade.php file found in your resources/views folder to app.blade.php. Replace the content of your app.blade.php with this: <>

The @inertia directive is a helper that creates a base div with an id of app that contains the page information; it tells Laravel that the views are generated using Inertia.

Next, set up the client-side adapter by running this command in your terminal:
npm install @inertiajs/inertia @inertiajs/inertia-vue
#or, Using Yarn
yarn add @inertiajs/inertia @inertiajs/inertia-vue

Open your app.js file found in resources/js and replace the content of your app.js file with the following:

The resolveComponent callback tells Inertia how to load a page component. It receives a string as a page name and returns a page instance.

To enable code-splitting we use a babel plugin for dynamic imports. First, install it by running this command:
npm install @babel/plugin-syntax-dynamic-import
#or, Using Yarn
yarn add @babel/plugin-syntax-dynamic-import

Next, create a .babelrc file in your project's root directory with the following:
{
  "plugins": ["@babel/plugin-syntax-dynamic-import"]
}

Finally, update the resolveComponent callback in your app initialization to use import instead of require. The callback returns a promise that includes a component instance, like this:

new Vue({
  render: h => h(InertiaApp, {
    props: {
      initialPage: JSON.parse(app.dataset.page),
      resolveComponent: name => import(`./Pages/${name}`).then(module => module.default),
    },
  }),
}).$mount(app)

Conclusion
Inertia is a great library for building "hybrid" SPAs. In this article, we've looked at its viability in the near future, the advantages it has, and how to use it in a Laravel and Vue project. Check out Inertia on Github and this article written by Jonathan Reinink to learn more. The official documentation is also well written and is an excellent resource to get started with.
Inertia.js is a framework created by Jonathan Reinink for creating server-driven single page apps. It combines the best parts of building SPAs, while keeping the conveniences of server-driven apps. For a typical Laravel and Vue app, Inertia replaces all your blade templates with Vue Single File components, allowing your application to be more interactive.
https://morioh.com/p/f53022ec18be
CC-MAIN-2020-10
en
refinedweb
Rubycritic
Sublime plugin to make a rubycritic assessment of the current file, and automatically open it in the browser

SublimeRubycritic

Installation
Before using this plugin, you must ensure that rubycritic is installed on your system. To install rubycritic, do the following:
[sudo] gem install rubycritic

- If you are using rbenv, ensure that its shims are loaded in your shell's correct startup file:
rbenv rehash

- Test in the Sublime console (View > Show Console):
import os
os.system("rubycritic")

If the result is 0, then there is no need to do the next steps. If the result is 32512, then Sublime cannot find rubycritic. Please do these instructions in your command line:
which rubycritic

- Copy the output and do this:
ln -s [OUTPUT] /usr/local/bin/rubycritic
# rbenv example:
ln -s /Users/YOURUSERNAME/.rbenv/shims/rubycritic /usr/local/bin/rubycritic

Contributing
If you would like to contribute enhancements or fixes, please do the following:
- Fork the plugin repository.
- Hack on a separate topic branch created from the latest master.
- Commit and push the topic branch.
- Make a pull request.
- Be patient. ;-)
https://packagecontrol.io/packages/Rubycritic
CC-MAIN-2020-10
en
refinedweb
Week 1: Uncertainty and Probability

More Machine Learning Motivation
Neil's Inaugural lecture is available here.

Probability Review
If you feel your basic probability needs brushing up, you might want to watch this lecture from 2012-13. It covers basic concepts of probability.

Reading for Probability Review
See also appendix of lecture notes below.
- Bishop: pg 12-17
- Bishop: Section 1.6 & 1.6.1, skip material on pg 50-51

Exercise
- Bishop: Exercise 1.3

Lecture Notes
Uncertainty and Probability Lecture Slides.

Reading for Week 1
- Rogers and Girolami: Chapter 2 up to page 62
- Bishop: Section 1.2.1 (pg 17-19)
- Bishop: Section 1.2.2 (pg 19-20)
- Bishop: Part of Section 1.2.4 (pg 24-25)
- Bishop: Rest of Section 1.2.4 (pg 26-28, don't worry about material on bias)

Exercises for Week 1
- Bishop: Exercise 1.7 & 1.8 (look at and understand them - don't need to recreate it)
- Bishop: Exercise 1.9: Do it.

Lab Class
Probabilities with Python and the iPython Notebook. The notebook for the lab class can be downloaded from here. To obtain the lab class in ipython notebook, first open the ipython notebook. Then paste the following code into the ipython notebook:

import urllib
urllib.urlretrieve('', 'MLAI_lab1.ipynb')

You should now be able to find the lab class by clicking File->Open on the ipython notebook menu. Solutions for the lab class can be downloaded from this notebook here.

Additional Material: Lecture from 2012/13 on Maximum Likelihood
- Maximum Likelihood Lecture Slides
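As a flavour of what the lab covers, here is a small, self-contained sketch of applying Bayes' rule to made-up counts; the numbers are invented for illustration and are not from the course materials:

# Hypothetical counts: 1000 emails, 300 of them spam;
# 120 spam and 35 non-spam emails contain the word "winner".
n_total = 1000
n_spam = 300
n_winner_spam = 120
n_winner_ham = 35

p_spam = n_spam / float(n_total)
p_winner = (n_winner_spam + n_winner_ham) / float(n_total)
p_winner_given_spam = n_winner_spam / float(n_spam)

# Bayes' rule: P(spam | winner) = P(winner | spam) P(spam) / P(winner)
p_spam_given_winner = p_winner_given_spam * p_spam / p_winner
print(p_spam_given_winner)  # 0.774...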
http://inverseprobability.com/mlai2013/week1.html
CC-MAIN-2017-43
en
refinedweb
From: Namjae Jeon <namjae.jeon@xxxxxxxxxxx>.

Signed-off-by: Namjae Jeon <namjae.jeon@xxxxxxxxxxx>
Signed-off-by: Ashish Sangwan <a.sangwan@xxxxxxxxxxx>
Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>
---
Changelog v4:
- Move block size aligned check from VFS layer to FS specific layer
- update comments for FALLOC_FL_COLLAPSE_RANGE in user visible header file.
- separate individual checks.
- collapse range don't permit to overlap the end of file.

 fs/open.c                   | 24 +++++++++++++++++++++---
 include/uapi/linux/falloc.h | 21 +++++++++++++++++++++
 2 files changed, 42 insertions(+), 3 deletions(-)

diff --git a/fs/open.c b/fs/open.c
index 4b3e1ed..4a923a5 100644
--- a/fs/open.c
+++ b/fs/open.c
@@ -231,7 +231,8 @@
 		return -EINVAL;

 	/* Return error if mode is not supported */
-	if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
+	if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE |
+		     FALLOC_FL_COLLAPSE_RANGE))
 		return -EOPNOTSUPP;

 	/* Punch hole must have keep size set */
@@ -239,11 +240,20 @@
 	    !(mode & FALLOC_FL_KEEP_SIZE))
 		return -EOPNOTSUPP;

+	/* Collapse range should only be used exclusively. */
+	if ((mode & FALLOC_FL_COLLAPSE_RANGE) &&
+	    (mode & ~FALLOC_FL_COLLAPSE_RANGE))
+		return -EINVAL;
+
 	if (!(file->f_mode & FMODE_WRITE))
 		return -EBADF;

-	/* It's not possible punch hole on append only file */
-	if (mode & FALLOC_FL_PUNCH_HOLE && IS_APPEND(inode))
+	/*
+	 * It's not possible to punch hole or perform collapse range
+	 * on append only file
+	 */
+	if (mode & (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_COLLAPSE_RANGE)
+	    && IS_APPEND(inode))
 		return -EPERM;

 	if (IS_IMMUTABLE(inode))
@@ -271,6 +281,14 @@

diff --git a/include/uapi/linux/falloc.h b/include/uapi/linux/falloc.h
index 990c4cc..5ff562d 100644
--- a/include/uapi/linux/falloc.h
+++ b/include/uapi/linux/falloc.h
@@ -5,5 +5,26 @@
 #define FALLOC_FL_PUNCH_HOLE	0x02 /* de-allocates range */
 #define FALLOC_FL_NO_HIDE_STALE	0x04 /* reserved codepoint */

+/*
+ * FALLOC_FL_COLLAPSE_RANGE is used to remove a range of a file
+ * without leaving a hole in the file. The contents of the file beyond
+ * the range being removed is appended to the start offset of the range
+ * being removed (i.e. the hole that was punched is "collapsed"),
+ * resulting in a file layout that looks like the range that was
+ * removed never existed. As such collapsing a range of a file changes
+ * the size of the file, reducing it by the same length of the range
+ * that has been removed by the operation.
+ *
+ * Different filesystems may implement different limitations on the
+ * granularity of the operation. Most will limit operations to
+ * filesystem block size boundaries, but this boundary may be larger or
+ * smaller depending on the filesystem and/or the configuration of the
+ * filesystem or file.
+ *
+ * Attempting to collapse a range that crosses the end of the file is
+ * considered an illegal operation - just use ftruncate(2) if you need
+ * to collapse a range that crosses EOF.
+ */
+#define FALLOC_FL_COLLAPSE_RANGE	0x08

 #endif /* _UAPI_FALLOC_H_ */
--
1.7.11-rc0
http://oss.sgi.com/archives/xfs/2014-02/msg00549.html
CC-MAIN-2017-43
en
refinedweb
Question: I am using a Python script for separating the domain from the respective emails and then grouping emails by their domain. The following script works for me:

#!/usr/bin/env python3
from operator import itemgetter
from itertools import groupby
import os
import sys

dr = sys.argv[1]
for f in os.listdir(dr):
    write = []
    file = os.path.join(dr, f)
    lines = [[l.strip(), l.split("@")[-1].strip()] for l in open(file).readlines()]
    lines.sort(key=itemgetter(1))
    for item, occurrence in groupby(lines, itemgetter(1)):
        func = [s[0] for s in list(occurrence)]
        write.append(item + "," + ",".join(func))
    open(os.path.join(dr, "grouped_" + f), "wt").write("\n".join(write))

I used: python3 script.py /path/to/input/files

The input I gave was a list of emails, and I got output like:

domain1.com,email1@domain1.com,email2@domain1.com
domain2.com,email1@domain2.com,email2@domain2.com,email3@domain2.com

The problem I am facing is caused by a MongoDB limit. MongoDB has a document size limit of 16 MB, and a single line in my output file is treated as one document by MongoDB, so a line must not exceed 16 MB. What I want is for the result to be limited to 21 emails per domain; if a domain has more emails, the rest should be printed on a new line starting with the same domain name (again wrapping onto a new line with the same domain if they exceed 21). I can store duplicate data in MongoDB. So the final output should look something like:

domain1.com,email1@domain1.com,email2@domain1.com,... email21@domain1.com
domain1.com,email22@domain1.com,...
domain2.com,email1@domain2.com,...

The dots (...) in the example above represent text I chopped out to keep it simple. I hope this clarifies my problem, and I am hoping to get a solution for it.

Solution 1:

New version. The script you posted indeed groups the emails by domain, with no limit in number. Below is a version that will group emails by domain, but split the found list into arbitrary chunks. Each chunk will be printed on its own line, starting with the corresponding domain.

The script:

#!/usr/bin/env python3
from operator import itemgetter
from itertools import groupby, islice
import os
import sys

dr = sys.argv[1]
size = 3

def chunk(it, size):
    it = iter(it)
    return iter(lambda: tuple(islice(it, size)), ())

for f in os.listdir(dr):
    # list the files
    with open(os.path.join(dr, "chunked_" + f), "wt") as report:
        file = os.path.join(dr, f)
        # create a list of email addresses and domains, sort by domain
        lines = [[l.strip(), l.split("@")[-1].strip()] for l in open(file).readlines()]
        lines.sort(key=itemgetter(1))
        # group by domain, split into chunks
        for domain, occurrence in groupby(lines, itemgetter(1)):
            adr = list(chunk([s[0] for s in occurrence], size))
            # write lines to output file
            for a in adr:
                report.write(domain + "," + ",".join(a) + "\n")

To use:
- Copy the script into an empty file and save it as chunked_list.py.
- In the head section, set the chunk size, e.g.: size = 5
- Run the script with the directory as argument: python3 /path/to/chunked_list.py /path/to/files

It will then create an edited copy of each of the files, named chunked_filename, with the (chunked) grouped emails.
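To see what the chunk helper defined above does on its own, here is a quick interactive check (my addition, not part of the original answer):

>>> list(chunk(range(7), 3))
[(0, 1, 2), (3, 4, 5), (6,)]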
What it does: the script takes as input a directory with files like:

email1@domain1
email2@domain1
email3@domain2
email4@domain1
email5@domain1
email6@domain2
email7@domain1
email8@domain2
email9@domain1
email10@domain2
email11@domain1

Of each file, it creates a copy like the following (with chunk size = 3):

domain1,email1@domain1,email2@domain1,email4@domain1
domain1,email5@domain1,email7@domain1,email9@domain1
domain1,email11@domain1
domain2,email3@domain2,email6@domain2,email8@domain2
domain2,email10@domain2

Solution 2:

To support arbitrarily large directories and files, you could use os.scandir(), receiving files one by one and processing each file line by line:

#!/usr/bin/env python3
import os

def emails_with_domain(dirpath):
    for entry in os.scandir(dirpath):
        if not entry.is_file():
            continue  # skip non-files
        with open(entry.path) as file:
            for line in file:
                email = line.strip()
                if email:  # skip blank lines
                    yield email.rpartition('@')[-1], email  # domain, email

To group email addresses by domain, no more than 21 emails per line, you could use collections.defaultdict():

import sys
from collections import defaultdict

dirpath = sys.argv[1]
with open('grouped_emails.txt', 'w') as output_file:
    emails = defaultdict(list)  # domain -> emails
    for domain, email in emails_with_domain(dirpath):
        domain_emails = emails[domain]
        domain_emails.append(email)
        if len(domain_emails) == 21:
            print(domain, *domain_emails, sep=',', file=output_file)
            del domain_emails[:]  # clear
    for domain, domain_emails in emails.items():
        print(domain, *domain_emails, sep=',', file=output_file)

Notes:
- all emails are saved to the same file
- lines with the same domain are not necessarily adjacent

See also: What is the most "pythonic" way to iterate over a list in chunks?
http://www.toontricks.com/2017/10/ubuntu-python-script-how-to-chop-output.html
CC-MAIN-2017-43
en
refinedweb
public class ScriptFreeTLV extends TagLibraryValidator

A TagLibraryValidator for enforcing restrictions against the use of JSP scripting elements. This TLV supports four initialization parameters, for controlling which of the four types of scripting elements are allowed or prohibited. The default value for all four initialization parameters is false, indicating that all forms of scripting elements are to be prohibited.

Methods inherited from class javax.servlet.jsp.tagext.TagLibraryValidator: getInitParameters, release

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

public ScriptFreeTLV()

public void setInitParameters(Map<String,Object> initParms)
  Overrides setInitParameters in class TagLibraryValidator.
  initParms - a mapping from the names of the initialization parameters to their values, as specified in the TLD.

public ValidationMessage[] validate(String prefix, String uri, PageData page)
  Overrides validate in class TagLibraryValidator.
  prefix - the namespace prefix specified by the page for the custom tag library being validated.
  uri - the URI specified by the page for the TLD of the custom tag library being validated.
  page - a wrapper around the XML representation of the page being validated.

Copyright © 1996-2015, Oracle and/or its affiliates. All Rights Reserved. Use is subject to license terms.
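As a hedged illustration of how such a validator is wired up, here is the general shape of a TLD entry. The four parameter names shown (allowDeclarations, allowScriptlets, allowExpressions, allowRTExpressions) are the ones defined for this class by the JSTL specification; verify them against your JSTL version before relying on them:

<validator>
  <validator-class>javax.servlet.jsp.jstl.tlv.ScriptFreeTLV</validator-class>
  <init-param>
    <param-name>allowDeclarations</param-name>
    <param-value>false</param-value>
  </init-param>
  <init-param>
    <param-name>allowScriptlets</param-name>
    <param-value>false</param-value>
  </init-param>
</validator>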
http://docs.oracle.com/javaee/7/api/javax/servlet/jsp/jstl/tlv/ScriptFreeTLV.html
CC-MAIN-2016-22
en
refinedweb
Yea, it looks straight-forward to implement. The range pattern is kind of interesting and the callbacks would be useful.

Personally, I dislike the global (single) callback style. I would prefer to see the standard use callbacks bound to a given vibration function.

On Fri, Oct 28, 2011 at 11:10 AM, Brian LeRoux <b@brian.io> wrote:
> absolutely. will be trivial given the capability already exists in the
> notification namespace --- can wait until we get the plugin work done
>
> On Thu, Oct 27, 2011 at 5:44 PM, Abu Obeida Bakhach
> <Abu.Obeida@microsoft.com> wrote:
> > Any plans to implement this for any of the devices?
> >
> > -----Original Message-----
> > From: brian.leroux@gmail.com [mailto:brian.leroux@gmail.com] On Behalf Of Brian LeRoux
> > Sent: Thursday, October 27, 2011 10:50 AM
> > To: callback-dev@incubator.apache.org
> > Subject: new vibration api
http://mail-archives.apache.org/mod_mbox/incubator-callback-dev/201110.mbox/%3CCAP7NMPrFXFWq+Zj5baizCaNfgVa4cHhc_OEKHUH2uscAbQLYqQ@mail.gmail.com%3E
CC-MAIN-2016-22
en
refinedweb
Hi! I'm very new to Java and I'm trying to write a class that allows me to:

- add to and subtract from a value called temperature through methods warmer and cooler
- read temperature
- set a maximum and minimum value for temperature

This is what I've come up with:

Code:
public class Heater {
    private int temperature;
    private int min;
    private int max;
    private int increment;
    private int recMin;
    private int recMax;

    public Heater() {
        temperature = 15;
        min = recMin;
        max = recMax;
        increment = 5;
    }

    public void insrtMin(int minimum) {
        recMin = minimum;
    }

    public void insrtMax(int maximum) {
        recMax = maximum;
    }

    public void warmer() {
        if ((temperature + increment) > max) {
        } else {
            temperature = temperature + increment;
        }
    }

    public void cooler() {
        if ((temperature - increment) < min) {
        } else {
            temperature = temperature - increment;
        }
    }

    public int returnTemp() {
        return temperature;
    }
}

The code worked fine until I added the conditions to 'warmer' and 'cooler'. Now, however, only the cooler method seems to work. What did I miss?

Thanks!
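[Editor's reading, not part of the original thread: one plausible culprit is that min and max are copied from recMin and recMax in the constructor, before insrtMin/insrtMax have ever been called, so both limits stay 0. That blocks warmer (temperature + increment is always greater than 0) while cooler still proceeds. A minimal sketch of a fix along those lines:]

// Compare against the recorded limits directly instead of the stale
// copies taken in the constructor (assumes insrtMin/insrtMax are
// called before warmer/cooler).
public void warmer() {
    if ((temperature + increment) <= recMax) {
        temperature = temperature + increment;
    }
}

public void cooler() {
    if ((temperature - increment) >= recMin) {
        temperature = temperature - increment;
    }
}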
http://www.codingforums.com/java-and-jsp/282015-conditions-error.html?s=ac24cd2293a57ac2166296e80d9b8d3a
CC-MAIN-2016-22
en
refinedweb
.NET Remoting with Events in Visual C++

I need the static_cast<> because ArrayList is a collection of Object references, so I need to cast them back to Greeter references as they come out of the collection. If you find this code ugly, you could use an STL collection; just remember the gcroot template that enables you to put managed types into an STL collection. (This is going to be so much neater in Whidbey.)

Notice how Alert() gets hold of each item in the collection and, after casting it, calls the RemoteAlertEvent() event as though it were a function. This will raise the event and invoke all the handlers that have been added to the event. There's no code here on the server to add handlers to the event, though. You'll see that on the client side.

So that the event can easily be raised, I added a button to the server form and had the event handler for the button click call the static Alert() method:

private:
    System::Void Alert_Click(System::Object * sender, System::EventArgs * e)
    {
        Greeting::Greeter::Alert("The server Alert button was clicked");
    }

The last step on the server side is to change the configuration file. This element needs to be expanded:

<channel ref="tcp" port="9555" />

It ends up looking like this:

<channel ref="tcp" port="9555">
  <serverProviders>
    <formatter ref="binary" typeFilterLevel="Full" />
  </serverProviders>
</channel>

This change is required in version 1.1 of the .NET Framework and up; older samples or articles that cover remoting will not mention it. Security settings in version 1.1 do not support callbacks by default because they might represent a vulnerability: client code is letting server code decide to trigger the execution of client code. You have to deliberately turn the feature on.

Changing the Client

The first step on the client side is to define and implement the handler class, RemoteHandler:

namespace GreetingClient
{
    public __gc class RemoteHandler : public Greeting::RemoteHandlerBase
    {
    protected:
        void HandleAlert(String* msg);
    };
}

Because RemoteHandler inherits from RemoteHandlerBase, which in turn inherits from MarshalByRefObject, instances of this class can be passed over remoting by reference. These references are used to invoke the HandleAlert() method when the event is raised. The implementation of HandleAlert() is nice and simple:

void RemoteHandler::HandleAlert(String* msg)
{
    Windows::Forms::MessageBox::Show(msg, "Alert from server");
}

Just as the server configuration file needed to be changed to permit callbacks, so does the client. After the existing <client>...</client> element, I added a channel element:

<channel ref="tcp" port="0">
  <serverProviders>
    <formatter ref="binary" typeFilterLevel="Full" />
  </serverProviders>
</channel>

As in the server configuration file, this element takes care of the security restrictions, making it clear I am deliberately using callbacks over remoting and that I trust the server application to trigger execution of parts of the client application. The <channel> element specifies a port of 0, so any available port can be used.

The client constructor gains another line of remoting "plumbing." (It's still a lot less code than you would write with DCOM.) After the call to Configure(), I added this line to set up the callback channel:

ChannelServices::RegisterChannel(new Tcp::TcpServerChannel("callback", 9556));

(This needs a using namespace Runtime::Remoting::Channels to find the classes.)
Choose any port you like that's different from the one you're accessing the remoted object over, and that's unlikely to be in use by anyone else.

You may be wondering how event handlers get added to the list that the server is keeping. I just use the remoted instance. In the Form1 constructor, there is already a line to create the remote object:

greet = new Greeting::Greeter();

Right after that line, I added:

RemoteHandler* rh = new RemoteHandler();
Greeting::RemoteHandlerBase* rhb = static_cast<Greeting::RemoteHandlerBase*>(rh);
greet->RemoteAlertEvent += new RemoteAlert(rhb, &RemoteHandler::Alert);

This code creates an instance of the RemoteHandler class, defined in the client. It then casts that instance to a RemoteHandlerBase* because the server is only aware of the RemoteHandlerBase class. (RemoteHandlerBase is in the Greeting assembly, and the client has a reference to that assembly, so client code knows about both RemoteHandler and RemoteHandlerBase.) The final line of this code snippet creates a delegate using the special C++ syntax. The first parameter to the delegate constructor is a pointer to the object, and the second parameter uses the pointer-to-member syntax to create a function pointer. Once constructed, the delegate is added directly to the event handler list in the Greeting object by accessing the public variable and using the += operator.

That's it! The client has code to make an instance of the handler object and add it to the list on the server. It also has an implementation of the handler method. The server has the delegate definition, and code to maintain a list of event handlers and then raise the event to them. The configuration files have been tweaked to allow events to pass over remoting.

Trying It Out

If you built the code for my previous column, and made the changes I've shown here, you can test it quite simply. Rebuild the entire solution and copy greetingserver.exe, greetingserver.exe.config, and greeting.dll to your second machine. Start the server and click Listen. Go to your first machine and start the client. If you want, make sure that Greet() and GetRecords() still work. Then, on the server, click the new button that raises the event. Nothing should appear to happen on the server. Go back to the client and you should see a message box. If you do, that means you raised an event on the server that was handled on the client. The possibilities for that are tremendous. This is a huge advantage of remoting over Web services and one I encourage you to explore a bit.
http://www.developer.com/net/cplus/article.php/10919_3339611_2/NET-Remoting-with-Events-in-Visual-C.htm
CC-MAIN-2016-22
en
refinedweb
- Author: jerzyk
- Posted: December 11, 2007
- Language: Python
- Version: .96

This will return HTTP 405 if the request was not POSTed. In the same way you can forbid POST requests: change 'POST' to 'GET'. Decorators provided for your convenience.

Comment: Django has its own decorators for this:

from django.views.decorators.http import require_http_methods, require_GET, require_POST
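The snippet body itself did not survive extraction. As a rough reconstruction from the description, here is the kind of decorator being described, using Django's HttpResponseNotAllowed; the actual snippet may have differed:

from django.http import HttpResponseNotAllowed

def require_POST(view_func):
    """Return HTTP 405 unless the request was POSTed."""
    def wrapped(request, *args, **kwargs):
        if request.method != 'POST':
            return HttpResponseNotAllowed(['POST'])  # 405 Method Not Allowed
        return view_func(request, *args, **kwargs)
    return wrapped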
https://djangosnippets.org/snippets/505/
CC-MAIN-2016-22
en
refinedweb
Basic authentication enables you to require credentials, in the form of a username and password, to make a transaction. These credentials are transmitted as plain text. The username and password are encoded as a sequence of base-64 characters before transmission. So, for example, the username "Fred" and password "Dinosaur" are combined as "Fred:Dinosaur". When encoded in base-64, these characters are equivalent to "RnJlZDpEaW5vc2F1cg==". For a Provider web service, a request message from a client contains the username and password fields in the request header. (For more information, see RFC 2617, HTTP Authentication: Basic and Digest Access Authentication.)

Basic authentication is supported by specifying a policy in the WSDL. A basic authentication policy can be added to the WSDL either manually or by using the WS-Policy Attachment window accessed from CASA and provided through Tango (WSIT). A basic authentication policy is specified at the root level of the WSDL, and a reference to the policy is made in the WSDL Port type section, binding the policy to the endpoint. To support basic authentication, the HTTP Binding Component defines the following WSDL elements:

MustSupportBasicAuthentication: This element has an attribute called on, which can be used to turn authentication on or off. This attribute accepts the values true or false. The MustSupportBasicAuthentication element within a policy is required to enable basic authentication in the endpoint.

UsernameToken: This element specifies the username and password fields for one of the following actions:

- Authenticate the request when the endpoint is a provider
- Invoke a web service with basic authentication enabled when the configured endpoint is a consumer

The username and password fields can be specified either as plain text in the WSDL, or as tokens in the WSDL and configured at runtime.

Three types of authentication mechanisms are supported for web service consumer endpoints. A consumer endpoint can be configured to use one of these mechanisms by adding it as a child element to the MustSupportBasicAuthentication element of the endpoint's Policy:

- WssTokenCompare Username/Password Authentication: Compares the username and password extracted from the HTTP Authorization request header with the username and password specified in the Policy's WssUsernameToken10 and WssPassword elements.
- AccessManager: Configures the consuming endpoint to use the Sun Access Manager to authenticate the HTTP client's credentials.
- Realm: Configures the consuming endpoint to use Sun Realm security to authenticate the HTTP client's credentials.

The following sections describe these mechanisms in more detail.

To use the WssTokenCompare feature, the Policy element must be present and must specify the username and password that are used for authentication. The username and password extracted from the HTTP Authorization request header are compared with the username and password specified in the Policy's WssUsernameToken10 and WssPassword elements. The sample WSDL for this mechanism contains the policy and its reference to use WssTokenCompare; note that an application variable token is used for the password so that the password is not exposed in the WSDL. The value of the password can be specified in the component's Application Variable property in NetBeans.

To use Access Manager to configure access-level authorization, you configure the consuming endpoint to use the Sun Access Manager to authenticate the client's credentials.
The HTTP Binding Component SOAP binding integrates seamlessly with Sun Access Manager to authenticate the HTTP client's credentials (the username and password extracted from the HTTP Authorization header) against the user's credentials in the Sun Access Manager database. To configure the HTTP/SOAP Binding Component to use Access Manager, set the HTTP Binding Component runtime property Sun Access Manager Configuration Directory to the directory where the Sun Access Manager's AMConfig.properties file can be found.

To configure the Sun Access Manager Configuration Directory, do the following:

1. Access the HTTP Binding Component properties from the NetBeans Services window: right-click sun-http-binding under Servers > GlassFish V2 > JBI > Binding Components, and choose Properties from the pop-up menu.
2. Configure the Sun Access Manager Configuration Directory property to specify the location of the Sun Access Manager's AMConfig.properties file.
3. Configure the policy in the WSDL to enable authorization by changing the Access Manager authorization attribute to true (that is, authorization="true"). This attribute is optional and the default value is false.

For a tutorial demonstrating how to secure communications between a service client and server using the Sun Java System Access Manager, see: Securing Communications in OpenESB with Sun Access Manager.

The HTTP Binding Component can integrate with the GlassFish Application Server, out of the box, to provide authentication of requesting clients by authenticating the client against the credentials in a "realm". To take advantage of this security feature, the HTTP/SOAP Binding Component's consuming endpoint needs to be properly configured in the WSDL.

To configure an HTTP/SOAP endpoint to use Realm security, configure the PolicyReference element. The PolicyReference identifies the Policy that provides the details for configuring Realm security. The PolicyReference element contains an attribute called URI. The value of the URI consists of a '#' character followed by the name of the policy defined somewhere else in the WSDL.

In the Policy example, you can ignore the UsernameToken: it is used by the "outbound" endpoint for sending the username/password credential when it sends a request. You don't need this element for "inbound" (consuming) endpoints, but it illustrates the bi-directionality of an endpoint.

The PolicyReference and Policy elements are used here simply to ensure adherence to the standard for SOAP binding. There are no Tango WS-Policy Attachments involved, and the WS-Policy Attachment "runtime" will ignore the child element MustSupportBasicAuthentication, which is specific to the HTTP SOAP BC.

For example, your GlassFish installation comes with a preconfigured file realm, which is essentially a file-based user database. See the GlassFish documentation on Realm security, or for a demonstration of how Realm security is configured for a SOAP endpoint, see Securing Communication using GlassFish Realm Security.
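The WSDL samples this section refers to were lost in extraction. Purely as a hypothetical sketch of the shape being described: the element names are taken from the text above, but the bc namespace prefix, the policy name, and all values are placeholders, not the product's actual schema.

<wsp:Policy wsu:Id="BasicAuthPolicy">
  <bc:MustSupportBasicAuthentication on="true">
    <bc:UsernameToken>
      <!-- plain-text values, or tokens resolved at runtime, per the text above -->
    </bc:UsernameToken>
  </bc:MustSupportBasicAuthentication>
</wsp:Policy>

<wsdl:binding name="MySoapBinding" type="tns:MyPortType">
  <wsp:PolicyReference URI="#BasicAuthPolicy"/>
  ...
</wsdl:binding>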
http://docs.oracle.com/cd/E19182-01/820-0595/ghlft/index.html
CC-MAIN-2016-22
en
refinedweb
MLPutReal32()

This feature is not supported on the Wolfram Cloud.

Details

- The argument x is typically declared as float in external programs, but must be declared as double in MLPutReal32() itself in order to work even in the absence of C prototypes.
- MLPutReal32() returns 0 in the event of an error, and a nonzero value if the function succeeds.
- Use MLError() to retrieve the error code if MLPutReal32() fails.
- MLPutReal32() is declared in the MathLink header file mathlink.h.

Examples

Basic Examples (1)

#include "mathlink.h"

/* send the number 3.4 to a link */
void f(MLINK lp)
{
    float numb = 3.4;
    if(! MLPutReal32(lp, numb))
    {
        /* unable to send 3.4 to lp */
    }
}
http://reference.wolfram.com/language/ref/c/MLPutReal32.html
CC-MAIN-2016-22
en
refinedweb
Rule of 72

en.wikipedia.org/wiki/Rule_of_72
In finance, the Rule of 72 is a shortcut to estimate the number of years required to double your money at a given annual rate of return (see compound annual growth rate).

betterexplained.com/articles/the-rule-of-72/ (Jan 25, 2007)
The Rule of 72 ...

genxfinance.com/use-the-rule-of-72-to-understand-compound-interest/
If you want to quickly determine how long it will take for your money to double, the Rule of 72 ...

beginnersinvest.about.com/cs/21jumpstreet/a/012501a.htm (Mar 28, 2016)
How to Use the Rule of 72 ... For example, using the Rule of 72 ...

Using the Rule of 72, take a closer look: you are 24 and have $3,000 in savings. You put it in an account that you expect to earn 8%. According to the Rule of 72, your money will double roughly every nine years (72 / 8 = 9). The power of ...
http://www.ask.com/web?q=Rule+of+72&oo=2603&o=0&l=dir&qsrc=3139&gc=1&qo=popularsearches
CC-MAIN-2016-22
en
refinedweb
Hacking / Customizing a Kobo Touch ebook reader: Part II, Python

I wrote last week about tweaking a Kobo e-reader's sqlite database by hand. But who wants to remember all the table names and type out those queries? I sure don't. So I wrote a Python wrapper that makes it much easier to interact with the Kobo databases.

Happily, Python already has a module called sqlite3. So all I had to do was come up with an API that included the calls I typically wanted -- list all the books, list all the shelves, figure out which books are on which shelves, and so forth. The result was kobo_utils.py, which includes a main function that can list books, shelves, or shelf contents.

You can initialize kobo_utils like this:

import kobo_utils

koboDB = KoboDB("/path/where/your/kobo/is/mounted")
koboDB.connect("/path/to/KoboReader.sqlite")

connect() throws an exception if it can't find the .sqlite file. Then you can list books thusly:

koboDB.list_books()

or list shelf names:

koboDB.list_shelves()

or use print_shelf to see which books are on which shelves. You can also pull rows out of the tables directly, for example:

shelves = koboDB.get_dlist("Shelf", selectors=[ "Name" ])
for shelf in shelves:
    print shelf["Name"]

What I really wanted, though, was a way to organize my library, taking the tags in each of my epub books and assigning them to an appropriate shelf on the Kobo, creating new shelves as needed. Using kobo_utils.py plus the Python epub library I'd already written, that ended up being quite straightforward: shelves_by_tag.
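If you would rather poke at the shelf assignments without the wrapper, something like the following should work. This is a sketch of mine, not from the post; it assumes Kobo's ShelfContent table with ShelfName and ContentId columns, so check your firmware's schema first:

import sqlite3

conn = sqlite3.connect("/path/to/KoboReader.sqlite")
for shelfname, contentid in conn.execute(
        "SELECT ShelfName, ContentId FROM ShelfContent ORDER BY ShelfName"):
    print shelfname, contentid    # Python 2 print, matching the snippets above
conn.close()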
http://shallowsky.com/blog/tags/ebook/
CC-MAIN-2016-22
en
refinedweb
johnloy's Recent JavaScript Snippets

All-time reputation: 80

- Way in JS to call singleton library methods within a namespace without fully qualifying the namespace in references (javascript) - posted on March 21, 2010
- parseUri 1.2: Split URLs in JavaScript (javascript, regex) - posted on March 7, 2010
- Custom error objects in Javascript (javascript, oop) - posted on February 26, 2010
- Prevent error messages in the cases where a console object doesn't exist (javascript, debugging, firebug; saved by 1 person) - posted on February 14, 2010
- commafy a number in javascript (javascript, strings, utility) - posted on April 16, 2009
http://snipplr.com/users/johnloy/language/javascript
CC-MAIN-2016-22
en
refinedweb
Fun with Modern C++ and Smart Pointers

modern c++, smart pointers, unique_ptr, shared_ptr

This is a journal entry, not an article, so I feel like I can ramble a bit before I get to the main point. I have been programming for some time now and got my CS degree way back in 2003, but in 2007 I decided to take a break from programming, moved to South Korea, and have been an English teacher ever since. I started to program again and, with the help of an artist friend of mine, made this for last year's IGF. We obviously didn't win and haven't decided whether or not to continue with the project. Anyway, because of my long hiatus from programming, quite a few things have passed me by, namely modern C++ and all of this smart pointer stuff. I've decided to start writing code not just for myself, but code that others could use, so I want to update my knowledge of C++ and explore smart pointers. In this journal entry I want to share what I've learned. I don't claim to be an expert, so if you have any useful insights or corrections, I'd appreciate a "friendly" comment.

Raw Pointer Primer

There are two basic ways to create a variable in C++: stack variables and heap allocations. Stack variables are defined inside a certain scope, and as long as you're in that scope, the variable will exist. Heap allocations (dynamically allocated memory), variables typically created with new or malloc, aren't tied to a scope and will exist until the memory is freed. Here's an example:

void foo()
{
    // Object A of class CSomeClass has been declared inside the scope of foo
    CSomeClass A;

    // do some stuff ....

    // you can even call other functions and use A as a parameter
    Func1(A);  // This could be pass by value or pass by reference depending on Func1's declaration
    Func2(&A); // Passes a pointer to A

    // at the end of this function, the scope will end and A will automatically be destroyed
}

Now with this function, every time another function calls foo, A will be created and then destroyed when the function exits. Not bad, right? What about this?

void foo()
{
    // Object A of class CSomeClass has been allocated on the heap
    CSomeClass *A = new CSomeClass;

    // do some stuff ....

    // you can even call other functions and use A as a parameter
    Func1(*A); // This could be pass by value or pass by reference depending on Func1's declaration
    Func2(A);  // Passes pointer A of CSomeClass

    // MEMORY LEAK
    // at the end of this function, the scope will end, but A was created on the heap
    // delete should be called here
}

So with dynamic memory allocations, you must free the memory. Why, you might ask, do we even need dynamic memory allocations? Well, for one, to declare variables on the stack you need to know exactly what you'll need at compile time. If you want to be able to create arrays of various sizes depending on user input, or if you're making a game and want to load a variable amount of resources, you'll need to use dynamic memory allocations. Take this example:

#include <iostream>

int num_students;

// First get the number of students in the class
std::cout << "How many students are in the class?";
std::cin >> num_students;

// Create a dynamic students array
CStudent *student_array = new CStudent[num_students];

// Do some stuff with the data ....

// call the array version of delete to free memory
delete [] student_array;

In the previous situation, you must use dynamic memory because the size of the array is determined by the user. How can smart pointers help?
Smart Pointers (chief reference: Smart Pointers (Modern C++) on MSDN)

Smart pointers allow you to create dynamic memory allocations but tie them to a scope or an owner. This way, when the owner goes out of scope, the data will be automatically deleted. Smart pointers are implemented using templates, and to use them you must include the header <memory>. Smart pointers live in the std namespace. I will only discuss unique_ptr's here. Later, I may talk about the other types of smart pointers in between updates to "A Complete Graphicsless Game".

So how do you create a unique_ptr?

std::unique_ptr<Base> apples(new Base(L"apples")); // Where Base is the class type

You create a unique_ptr like a typical template and then pass the raw pointer to initialize the variable. After that, you can use the unique_ptr just as you would any other pointer.

#include <iostream>
#include <memory>
#include <string>

class Base
{
public:
    Base(const std::wstring &string)
        : m_string(string)
    {
    }

    virtual void Display() const
    {
        std::wcout << L"Base:" << m_string << std::endl;
    }

private:
    std::wstring m_string;
};

int main()
{
    // declare some unique_ptrs. These pointers can have only one owner and cannot be copied. Only moved.
    std::unique_ptr<Base> apples(new Base(L"apples"));
    apples->Display();
}

unique_ptr's can also be passed to other functions, but you must pass by reference. Passing by value will result in a compiler error. With unique_ptr's, only one owner can own the pointer, and if you could pass by value you would make a copy of the unique_ptr, which in essence would make two unique_ptr's that own the same block of memory. You can also use unique_ptr's with derived classes and virtual functions and get the typical C++ behavior.

class Base
{
public:
    Base(const std::wstring &string)
        : m_string(string)
    {
    }

    // virtual destructor so deleting a Derived through a unique_ptr<Base> is well-defined
    virtual ~Base()
    {
    }

    virtual void Display() const
    {
        std::wcout << L"Base:" << m_string << std::endl;
    }

private:
    std::wstring m_string;
};

class Derived : public Base
{
public:
    Derived(const std::wstring &string)
        : Base(string)
    {
    }

    virtual void Display() const
    {
        std::wcout << L"Derived:::";
        __super::Display(); // __super is MS specific. Others should use Base::Display();
    }
};

int main()
{
    // declare some unique_ptrs. These pointers can have only one owner and cannot be copied. Only moved.
    std::unique_ptr<Base> apples(new Base(L"apples"));
    std::unique_ptr<Base> oranges(new Derived(L"oranges"));
    apples->Display();
    oranges->Display();
}

This is very useful when dealing with vectors, as the next example shows.

std::vector<std::unique_ptr<Base>> test_vector;
test_vector.push_back(std::unique_ptr<Base>(new Base(L"apples")));
test_vector.push_back(std::unique_ptr<Base>(new Derived(L"oranges")));

In the above example, you can use the vector of unique_ptr's in the same way you would if it were a vector of raw Base pointers, but there is one nice benefit: if this vector held raw pointers, you'd have to make sure you manually freed the pointers before clearing the vector or erasing an element, but with unique_ptr's, all of that is handled for you.

Conclusion

I hope this information was helpful. I am in no way an expert, so if anyone sees something glaring that I missed, feel free to leave a "kind" comment. If you would like to know more about smart pointers, I suggest checking out the link on MSDN.

Sorry. I tried editing this on my cell and it messed up all my code blocks. I'll fix the rest when I get home and on a proper computer.

Edit: Fixed
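To make the "one owner" rule concrete, here is a small sketch of my own (not from the MSDN article) showing that ownership must be transferred explicitly with std::move, and that passing by reference avoids a transfer:

#include <iostream>
#include <memory>
#include <utility>

void Inspect(const std::unique_ptr<int>& p)  // borrowed by reference; no copy, no transfer
{
    std::cout << *p << std::endl;
}

int main()
{
    std::unique_ptr<int> a(new int(42));
    Inspect(a);                              // fine: pass by reference
    // std::unique_ptr<int> b = a;           // compiler error: copying is forbidden
    std::unique_ptr<int> b = std::move(a);   // explicit ownership transfer
    // 'a' is now empty (null); 'b' owns the int
    std::cout << (a ? "a owns it" : "a is empty") << std::endl;
}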
http://www.gamedev.net/blog/1670/entry-2256432-fun-with-modern-c-and-smart-pointers/
CC-MAIN-2016-22
en
refinedweb
Difference between revisions of "EclipseLink/Development/404452"

Revision as of 15:47, 17 September 2013

Contents
- 1 Planning: MOXy JSON Schema Generation
- 2 Overview
- 3 Phases

Phases

The development of this feature will need to be split into multiple phases. The current EclipseLink JSON support is insufficient to map to a JSON schema document. Additional enhancements will be required in order to generate these schemas.

Phase 1: Additional Mapping Support - *Completed*

Phase 2 - *Completed*

Phase 3 - *Completed*

Example of the generated schema (parts of the fragment were lost; ellipses mark the gaps):

{
  ...
  "properties": {
    ... { "ref": "#/definitions/Department" }
  },
  "required": ["id"],
  "definitions": {
    "Address": {
      "type": "object",
      "properties": {
        "street": { "type": "string" },
        "city": { "type": "string" },
        "country": { "type": "string" }
      }
    },
    "PhoneNumber": {
      "type": "object",
      "properties": {
        "areaCode": { "type": "string" },
        "number": { "type": "string" }
      }
    },
    "Department": { "enum": ["dev", "support", "sales", "qa"] }
  }
}

Phase 4 - Additional Mappings and Properties - *Completed*

Known Issues/Bugs
- *In Progress* - XmlVariableXPathObjectMapping/XmlVariableXPathCollectionMapping

Resolved Issues/Bugs
- Bug 410638: CompositeMapping with no reference descriptor causes NPE
- Cyclic references (e.g. Employee has a List<Employee>) cause an infinite loop
- @XmlValue annotation (see org.eclipse.persistence.testing.jaxb.xmlidref.XmlIdRefTestCases)
- BinaryData/BinaryDataCollectionMapping
- XmlObjectReferenceMapping/XmlCollectionReferenceMapping
- AnyObject/AnyCollectionMapping
- Bug 410658: JSON_INCLUDE_ROOT property defaults to true; fixed, but may not be in time for 2.5.1
- NAMESPACE_PREFIX_MAPPER & JSON_NAMESPACE_SEPARATOR: if both of these are set then namespace processing will be enabled, and property names will be prepended with the prefix corresponding to their namespace.
- XmlInverseReferenceMapping (see org.eclipse.persistence.testing.jaxb.annotations.xmlcontainerproperty.ContainerPropertyTestCase)
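For orientation, here is roughly how this feature is driven from application code. This is my sketch, under the assumption that the work landed as a generateJsonSchema(SchemaOutputResolver, Class) method on MOXy's org.eclipse.persistence.jaxb.JAXBContext; the Employee class is hypothetical, and the exact signature should be verified against your EclipseLink release:

import javax.xml.bind.SchemaOutputResolver;
import javax.xml.transform.Result;
import javax.xml.transform.stream.StreamResult;
import org.eclipse.persistence.jaxb.JAXBContext;
import org.eclipse.persistence.jaxb.JAXBContextFactory;

public class JsonSchemaDemo {
    public static void main(String[] args) throws Exception {
        JAXBContext ctx = (JAXBContext)
                JAXBContextFactory.createContext(new Class[] { Employee.class }, null);
        // Resolver that streams the generated schema to stdout
        SchemaOutputResolver resolver = new SchemaOutputResolver() {
            @Override
            public Result createOutput(String namespaceUri, String suggestedFileName) {
                StreamResult result = new StreamResult(System.out);
                result.setSystemId(suggestedFileName);
                return result;
            }
        };
        ctx.generateJsonSchema(resolver, Employee.class);  // assumed entry point
    }
}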
http://wiki.eclipse.org/index.php?title=EclipseLink/Development/404452&diff=prev&oldid=347418
CC-MAIN-2016-22
en
refinedweb
I am reading in a text file and basically, when a line has the user-defined string, it prints the line and counts each time that word occurred. However, according to Notepad++, the word "the" should occur 3,781 times, but when I run the program I only get 3,404 times... and I believe it has to do with the string member function find().

/********************************************
 * File: stringSearch.cpp
 * Purpose:
 * Write a program that asks for a user to
 * enter the name of a file and a string
 * to search for. The program will then
 * display all lines the string occurs in
 * and how many times it occurs.
 **********************************************/
#include <iostream>
#include <fstream>
#include <string>

using namespace std;

/* the = 3404 times, should be 3,781 */
int main()
{
    string infile, stringBuffer, stringBufCopy, userString;
    size_t strFound;
    int stringCount = 0;
    int stringIt = 0;

    cout << "Enter file name: ";
    getline(cin, infile);
    fstream file(infile, ios::in);
    if(!file)
    {
        while(!file)
        {
            cout << "Invalid file entry!" << endl;
            cout << "Re-enter file: ";
            getline(cin, infile);
            fstream file(infile, ios::in);
        }
    }

    cout << "Enter string to search for: ";
    getline(cin, userString);

    while(!file.eof())
    {
        getline(file, stringBuffer, '.'); // Get line from file until a '.' occurs
        stringBufCopy = stringBuffer;
        strFound = stringBuffer.find(userString);
        if(strFound != string::npos)
        {
            while(strFound != string::npos)
            {
                stringCount++;
                cout << stringBuffer << endl;
                strFound = stringBuffer.find(userString, strFound+1); // HERE maybe?
            }
        }
    }

    cout << "The string: \"" << userString << "\" was found " << stringCount << " times!" << endl;

    file.close();
    cin.clear();
    cin.sync();
    cin.get();

    return 0;
}

Also, one more question... what does string::npos mean?

EDIT: I know there are some redundancies and I will change and modify them, but I have just been moving things around and adding and removing stuff, so for now it's just about getting it working properly; I will then take out the redundancies and useless code.
http://www.dreamincode.net/forums/topic/284139-quick-logic-question/
CC-MAIN-2016-22
en
refinedweb
URL Loading

Introduction

This section describes how to use the URLLoader API to load resources such as images and sound files from a server into your application. The example discussed in this section is included in the SDK in the directory examples/api/url_loader.

Reference information

For reference information related to loading data from URLs, see the following documentation:
- url_loader.h - Contains the URLLoader class for loading data from URLs
- url_request_info.h - Contains the URLRequest class for creating and manipulating URL requests
- url_response_info.h - Contains the URLResponse class for examining URL responses

Background

When a user launches your Native Client web application, Chrome downloads and caches your application's HTML file, manifest file (.nmf), and Native Client module (.pexe or .nexe). If your application needs additional assets, such as images and sound files, it must explicitly load those assets. You can use the Pepper APIs described in this section to load assets from a URL into your application.

After you've loaded assets into your application, Chrome will cache those assets. To avoid being at the whim of the Chrome cache, however, you may want to use the Pepper FileIO API to write those assets to a persistent, sandboxed location on the user's file system.

The url_loader example

The SDK includes an example called url_loader demonstrating downloading files from a server. This example has these primary files:
- index.html - The HTML code that launches the Native Client module.
- example.js - The JavaScript file for index.html. It has code that sends a PostMessage request to the Native Client module when the "Get URL" button is clicked.
- url_loader_success.html - An HTML file on the server whose contents are being retrieved using the URLLoader API.
- url_loader.cc - The code that sets up and provides an entry point into the Native Client module.
- url_loader_handler.cc - The code that retrieves the contents of the url_loader_success.html file and returns the results (this is where the bulk of the work is done).

The remainder of this document covers the code in the url_loader.cc and url_loader_handler.cc files.

URL loading overview

Like many Pepper APIs, the URLLoader API includes a set of methods that execute asynchronously and that invoke callback functions in your Native Client module. The high-level flow for the url_loader example is described below. Note that methods in the namespace pp::URLLoader are part of the Pepper URLLoader API, while the rest of the functions are part of the code in the Native Client module (specifically in the file url_loader_handler.cc). The following image shows the flow of the url_loader_handler code.

Following are the high-level steps involved in URL loading:
- The Native Client module calls pp::URLLoader::Open to begin opening the URL.
- When Open completes, it invokes a callback function in the Native Client module (in this case, OnOpen).
- The Native Client module calls the Pepper function URLLoader::ReadResponseBody to begin reading the response body with the data. ReadResponseBody is passed an optional callback function in the Native Client module (in this case, OnRead). The callback function is an optional callback because ReadResponseBody may read data and return synchronously if data is available (this improves performance for large files and fast connections).

The remainder of this document demonstrates how the previous steps are implemented in the url_loader example.
url_loader deep dive

Setting up the request

HandleMessage in url_loader.cc creates a URLLoaderHandler instance and passes it the URL of the asset to be retrieved. Then HandleMessage calls Start to start retrieving the asset from the server:

void URLLoaderInstance::HandleMessage(const pp::Var& var_message) {
  if (!var_message.is_string()) {
    return;
  }
  std::string message = var_message.AsString();
  if (message.find(kLoadUrlMethodId) == 0) {
    // The argument to getUrl is everything after the first ':'.
    size_t sep_pos = message.find_first_of(kMessageArgumentSeparator);
    if (sep_pos != std::string::npos) {
      std::string url = message.substr(sep_pos + 1);
      printf("URLLoaderInstance::HandleMessage('%s', '%s')\n",
             message.c_str(), url.c_str());
      fflush(stdout);
      URLLoaderHandler* handler = URLLoaderHandler::Create(this, url);
      if (handler != NULL) {
        // Starts asynchronous download. When download is finished or when an
        // error occurs, |handler| posts the results back to the browser
        // via PostMessage and self-destroys.
        handler->Start();
      }
    }
  }
}

Notice that the constructor for URLLoaderHandler in url_loader_handler.cc sets up the parameters of the URL request (using SetURL, SetMethod, and SetRecordDownloadProgress):

URLLoaderHandler::URLLoaderHandler(pp::Instance* instance,
                                   const std::string& url)
    : instance_(instance),
      url_(url),
      url_request_(instance),
      url_loader_(instance),
      buffer_(new char[READ_BUFFER_SIZE]),
      cc_factory_(this) {
  url_request_.SetURL(url);
  url_request_.SetMethod("GET");
  url_request_.SetRecordDownloadProgress(true);
}

Downloading the data

Start in url_loader_handler.cc creates a callback (cc) using a CompletionCallbackFactory. The callback is passed to Open, and OnOpen is invoked upon Open's completion. Open begins loading the URLRequestInfo.

void URLLoaderHandler::Start() {
  pp::CompletionCallback cc =
      cc_factory_.NewCallback(&URLLoaderHandler::OnOpen);
  url_loader_.Open(url_request_, cc);
}

OnOpen ensures that the Open call was successful and, if so, calls GetDownloadProgress to determine the amount of data to be downloaded so it can allocate memory for the response body. Note that the amount of data to be downloaded may be unknown, in which case GetDownloadProgress sets total_bytes_to_be_received to -1. It is not a problem if total_bytes_to_be_received is set to -1 or if GetDownloadProgress fails; in these scenarios memory for the read buffer can't be allocated in advance and must be allocated as data is received. Finally, OnOpen calls ReadBody.

void URLLoaderHandler::OnOpen(int32_t result) {
  if (result != PP_OK) {
    ReportResultAndDie(url_, "pp::URLLoader::Open() failed", false);
    return;
  }
  int64_t bytes_received = 0;
  int64_t total_bytes_to_be_received = 0;
  if (url_loader_.GetDownloadProgress(&bytes_received,
                                      &total_bytes_to_be_received)) {
    if (total_bytes_to_be_received > 0) {
      url_response_body_.reserve(total_bytes_to_be_received);
    }
  }
  url_request_.SetRecordDownloadProgress(false);
  ReadBody();
}

ReadBody creates another CompletionCallback (a NewOptionalCallback) and passes it to ReadResponseBody, which reads the response body, and AppendDataBytes, which appends the resulting data to the previously read data.
void URLLoaderHandler::ReadBody() {
  pp::CompletionCallback cc =
      cc_factory_.NewOptionalCallback(&URLLoaderHandler::OnRead);
  int32_t result = PP_OK;
  do {
    result = url_loader_.ReadResponseBody(buffer_, READ_BUFFER_SIZE, cc);
    if (result > 0) {
      AppendDataBytes(buffer_, result);
    }
  } while (result > 0);
  if (result != PP_OK_COMPLETIONPENDING) {
    cc.Run(result);
  }
}

void URLLoaderHandler::AppendDataBytes(const char* buffer, int32_t num_bytes) {
  if (num_bytes <= 0)
    return;
  num_bytes = std::min(READ_BUFFER_SIZE, num_bytes);
  url_response_body_.insert(
      url_response_body_.end(), buffer, buffer + num_bytes);
}

Eventually either all the bytes have been read for the entire file (resulting in PP_OK, or 0), all the bytes have been read for what has been downloaded so far but more is still to be downloaded (PP_OK_COMPLETIONPENDING, or -1), or there is an error (less than -1). OnRead is called in the event of an error or PP_OK.

Displaying a result

OnRead calls ReportResultAndDie when either an error or PP_OK is returned, to indicate that streaming of the file is complete. ReportResultAndDie then calls ReportResult, which calls PostMessage to send the result back to the HTML page.
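The body of ReportResult is not shown in this section. As a rough sketch of what that last step typically looks like, using the handler fields seen above (the real example's implementation may differ):

// Hand the result back to JavaScript and clean up; the handler
// self-destroys once it has reported, as noted earlier.
void URLLoaderHandler::ReportResult(const std::string& fname,
                                    const std::string& text,
                                    bool success) {
  if (instance_) {
    instance_->PostMessage(pp::Var(fname + (success ? "\n" : " failed\n") + text));
  }
  delete this;
}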
https://developer.chrome.com/native-client/devguide/coding/url-loading
CC-MAIN-2016-22
en
refinedweb
How to Get Type Name without full namespace in C#?

If you want to find the full name of a type in C#, you can use the typeof keyword, as shown in the below code snippet:

var str1 = typeof(Author).ToString();

The problem with this method is that it displays the full name along with the namespace. If you want to get only the class name without the namespace, you can use the Name property of the MemberInfo class as shown below:

var str2 = typeof(Author).Name;
Console.WriteLine("using typeof(Author).Name results in " + str2);

Constant -> static and readonly -> instance field in C#

3 options to return multiple values from a method in C#

Failed to compare two elements in the array in C#

When you want to use the Sort method of the List<T> (without parameters), you should be implementing the IComparable<T> interface.

Q&A #47 – Do you know what is Lightning?

.NET developers, have you heard of this term "Lightning"?

BUILD 2016 Live Streaming

Tilde (~) Symbol in the Enum definition in C#

During a casual discussion with one of my friends, I came across a question on whether the tilde (~) symbol can be used in an enum definition as shown below:

public enum TypeData
{
    All = ~0,
    None = 0
}

Windows 10 Step by Step Tutorial – Copy and Paste Functionality

Introduction

There are various ways in which you can exchange data between apps in Windows Apps. Some of the techniques include:
- Copy and Paste
- Share Contract
- Drag and Drop

Tutorial – How to integrate Copy and Paste functionality in UWP apps?

Integrating the Copy and Paste functionality in a UWP app is a four-step process.

1. Add the namespace "Windows.ApplicationModel.DataTransfer" to the code-behind file.
2. Create an instance of the DataPackage class and set the RequestedOperation property to the desired functionality:

DataPackage dataPackageobj = new DataPackage
{
    RequestedOperation = DataPackageOperation.Copy
};

How to run Visual Studio 2015 in safe mode?

There are times when your Visual Studio instance might not start up correctly. One of the best options at such a time is to run Microsoft Visual Studio in safe mode. By running Visual Studio in safe mode, you work with the default environment: in this mode, all third-party extensions are disabled. This helps you identify whether the problem was caused by a third-party add-in.

To run Visual Studio 2015 in safe mode, open the Developer Command Prompt for VS2015 and type the command:

devenv.exe /safemode

After the command is executed, Microsoft Visual Studio 2015 will start in safe mode, which is indicated in the title bar of Visual Studio 2015.
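Returning to the tilde post above with a quick illustration of my own: ~0 is the bitwise complement of zero, i.e. all bits set, which is why it is a handy "All" value for flags enums. The Permissions enum here is invented for the example:

using System;

[Flags]
public enum Permissions
{
    None  = 0,
    Read  = 1,
    Write = 2,
    Exec  = 4,
    All   = ~0   // all bits set: covers every current and future flag
}

class Demo
{
    static void Main()
    {
        Permissions p = Permissions.All;
        // HasFlag is true for any combination of flags when All = ~0
        Console.WriteLine(p.HasFlag(Permissions.Read | Permissions.Write)); // True
    }
}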
http://developerpublish.com/
CC-MAIN-2016-22
en
refinedweb
CPAN::Testers::Metabase - Instantiate a Metabase backend for CPAN Testers

version 1.999002

SYNOPSIS

  use CPAN::Testers::Metabase::Demo;

  # defaults to directory on /tmp
  my $mb = CPAN::Testers::Metabase::Demo->new;

  $mb->public_librarian->search( %search_spec );

DESCRIPTION

The CPAN::Testers::Metabase namespace is intended to span a collection of classes that instantiate specific Metabase backend storage and indexing capabilities for a CPAN Testers style Metabase.

Each class consumes the Metabase::Gateway role and can be used by the Metabase::Web application as a data model. See the specific classes for more detail.
http://search.cpan.org/~dagolden/CPAN-Testers-Metabase-1.999002/lib/CPAN/Testers/Metabase.pm
CC-MAIN-2016-22
en
refinedweb
On Mon, Apr 1, 2013 at 11:16 PM, Daniel Pocock <daniel@pocock.com.au> wrote:
> On 01/04/13 22:04, John Paul Adrian Glaubitz wrote:
>> On 04/01/2013 09:59 PM, Daniel Pocock wrote:
>>> Agreed, but that doesn't complete the picture, as libgl1-mesa-glx
>>> doesn't depend on libgl1-mesa-dri:
>>>
>>> $ apt-cache depends libgl1-mesa-glx
>>> ...
>>> Recommends: libgl1-mesa-dri
>>
>> Well, "Recommends" are installed by default, aren't they? However, I'm
>
> Not during upgrade or dist-upgrade operations. This is specifically an
> upgrading issue. From man apt-get:
>
> "upgrade:
> ... under no circumstances are currently installed packages removed,
> or packages not already installed retrieved and installed."

Correct for apt/squeeze, partly wrong for apt/wheezy (since 0.8.15.3). In apt/wheezy, a package requiring a new recommends which was previously in a non-broken policy state will be held back, just like other packages requiring a new depends. In apt/squeeze the policy will break, which you could fix with "apt-get install --fix-policy", but that is going to fix ALL recommends.

We are going to be "fine" in this regard, as many packages have a new dependency in a new release (upgrade is mostly for between releases). In this case it is at least "multiarch-support".

> "dist-upgrade:
> ... intelligently handles changing dependencies with new versions of
> packages"

dist-upgrade, on the other hand, installs new recommends, and has since the introduction of recommends. The keyword is "new": if you had recommends disabled previously and/or removed a recommends, apt will not install this recommendation again. (It compares the recommends list of the old version with the new version, and only uninstalled recommends present in the new, but not in the old, version are marked for installation.)

Of course, if the recommends isn't installable you will still get a solution which doesn't include this recommends, which will be displayed as usual. You have to install it later by hand then, as it is now an old recommends ... (In stable, uninstallability shouldn't happen, though.)

I guess the confusion comes from the word "dependencies": in the APT namespace, "dependency" means any relation which is allowed, not just a "Depends". So the sentence should be read as "... handles changing Pre-Depends, Depends, Conflicts, Breaks, Replaces, Provides, Recommends (if enabled, default yes) and Suggests (if enabled, default no) with new versions ..." (for the sake of completeness: Enhances are not handled).

It's just that a user shouldn't really be required to know what those are. (If you dig deeper [usually in non-user-facing texts] you will come across "hard", "important", "soft", "negative" and "positive" dependencies to complete the confusion. I will leave it as an exercise for now which subsets are meant by those adjectives.)

Best regards

David Kalnischkies
https://lists.debian.org/debian-devel/2013/04/msg00090.html
CC-MAIN-2016-22
en
refinedweb
This chapter provides guidelines and best practices for designing and securing Oracle Application Development Framework (Oracle ADF) view objects and other supporting business component objects for use by Oracle Business Intelligence Applications. This chapter includes the following sections:

- Section 59.1, "Introduction to View Objects for Oracle Business Intelligence Applications"
- Section 59.2, "General Design Guidelines"
- Section 59.3, "Understanding Oracle Business Intelligence Design Patterns"
- Section 59.4, "Designing and Securing Fact View Objects"
- Section 59.5, "Designing and Securing Dimension View Objects"
- Section 59.6, "Designing Date Dimensions"
- Section 59.7, "Designing Lookups as Dimensions"
- Section 59.8, "Designing and Securing Tree Data"
- Section 59.9, "Supporting Flexfields for Oracle Business Intelligence"
- Section 59.10, "Supporting SetID"
- Section 59.11, "Supporting Multi-Currency"

The view objects that are designed and created for Oracle Business Intelligence Applications (Oracle BI Applications) are shared between Oracle Transactional Business Intelligence and Oracle BI Applications. The Oracle BI Applications warehouse is populated from Fusion application databases using an ETL (extract, transform, and load) process. The ETL process uses an ETL tool to source data from the source system (Oracle Fusion Applications) into the target Oracle BI Applications tables. The extract from the source system is done using Oracle Application Development Framework (Oracle ADF) view objects. Figure 59-1 illustrates the Oracle Business Intelligence architecture.

Oracle BI Enterprise Edition (Oracle BI EE) needs to efficiently access data from two or more master/detail-linked view objects in order to aggregate, present, or report on that combined data set. An essential requirement is to efficiently retrieve the multiple levels of related information as a single, flattened query result, in order to perform subsequent aggregation or transformation on it. The Oracle ADF Composite View Object API allows the caller to create a new view object at runtime that composes the hierarchical results from two or more existing view-linked view objects into a single, flattened query retrieving the same effective set of data.

From a performance perspective, such queries would need to be performed on low-level data in Oracle BI EE, since the Oracle ADF layer does not directly support aggregation. This would generally slow query performance down. Also, going through additional servers (that is, JavaHost and Oracle ADF) in the network would be slower than directly querying the database. Therefore, the SQL Bypass feature has been introduced to directly query the database and push aggregations and other transformations down to the database server, where possible, thereby reducing the amount of data streamed and worked on by Oracle BI EE.

The SQL Bypass functionality in Oracle BI EE utilizes the Composite View Object API to construct and return a flattened SQL Bypass query that incorporates all of the required columns, filters, and joins required by the Oracle Business Intelligence query. Oracle BI EE then executes this query directly against the database.

When designing view objects for Oracle Business Intelligence Applications, you should use the following guidelines with regard to entity objects, associations, view objects, view links, and view criteria.
An entity object represents a row in a database table and simplifies modifying its data by handling all data manipulation language (DML) operations for you. It can encapsulate business logic for the row to ensure that your business rules are consistently enforced. Entity objects are required for all Oracle Business Intelligence view objects to support SQL pruning of declarative view objects and to leverage many Fusion-specific features. For more information, see "Creating a Business Domain Layer Using Entity Objects" in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework. All attributes from the physical table (with the exception of special, highly sensitive attributes) should be exposed on the entity objects.

An association reflects relationships between entity objects and can be by either reference or composition. All view objects composed of multiple entity objects are flattened using entity object associations. For more information about associations, see "Creating Entity Objects and Associations" in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework. For more information about flattening, see Section 59.3.1, "Understanding Flattened View Objects."

A view object represents a SQL query. You use the full power of the familiar SQL language to join, filter, sort, and aggregate data into exactly the shape required by the end-user task. For more information, see "Defining SQL Queries Using View Objects" in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework. This section includes some technical requirements, how to use declarative SQL mode, and guidelines regarding view object attributes and outer joins.

The following are the technical requirements driven by use of the Composite View Object API, SQL Bypass, and SQL pruning.

Composite View Object API
- Use view links to establish relationships between view objects. View links must not contain custom SQL functions such as TRUNC and BETWEEN.
- Use the BI_JOINTYPE custom view link property to define outer joins on view links.
- There is no support for Java or Groovy calculated attributes.
- Programmatic view objects, transient view objects, and transient attributes are not supported.

SQL Bypass
- Full SQL can be obtained at runtime using vo.getQuery().
- There is no support for transient attributes.
- View objects must not contain bind parameters.
- There is no support for Java logic or Java calculated attributes.
- Do not apply data security view criteria programmatically.
- If you are using Multi-Organization Access Control (MOAC), you must not enable MOAC for the view objects for Oracle Business Intelligence Applications. Use the underlying Fusion Data Security instead.

For more information, see "About Specifying a SQL Bypass Database" in Oracle Fusion Middleware Metadata Repository Builder's Guide for Oracle Business Intelligence Enterprise Edition.

SQL Pruning
- You should create your view objects in declarative SQL mode. For more information, see "Working with View Objects in Declarative SQL Mode" in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.
- You should set primary entity usage to identify the fact and dimension grain, because primary entity usage cannot be pruned.
- You must set the Selected in Query property for non-primary key and unsecured attributes to false.
- You should limit view criteria on attributes derived from non-primary entities, because attributes used in applied view criteria cannot be pruned.
- You should limit order-by clauses on attributes derived from non-primary entities, because attributes used in applied order-by clauses cannot be pruned.
As a general rule, you should include all attributes from the underlying primary and reference entity objects in your view objects. Flex attributes are an exception to this rule; these attributes are not required because they are exposed using the Flex Extender utility. You should include only the name and description attributes from reference entity objects that are included solely to resolve ID and Code columns.

You should include Standard Who Columns from all participating entity objects on your view objects for Oracle Business Intelligence Applications, to support Oracle BI Applications' Change Data Capture requirements. Exceptions include entity objects that are included only to resolve IDs and Codes into meaningful descriptions, for example, entity objects included only to resolve Transactional Business Intelligence-only attributes into a view object using entity object associations. Table 59-1 shows the Standard Who Columns.

You should set the Selected in Query property to false on all non-primary key view object attributes.

Use the following guidelines to resolve view object foreign keys:
- If the foreign key is a dimension, Oracle Business Intelligence requires a dimension view object and a view link to the dimension view object.
- If the foreign key is a warehouse domain, Oracle BI Applications requires a view object for ETL. No view link is required; Oracle BI EE lookup functionality is used to resolve the foreign key.
- If the foreign key is neither a dimension nor a warehouse domain, you should resolve the foreign key using entity object associations. For MLS-enabled entities, ID and Code attributes should be resolved using _VL views.

You should resolve duplicate attribute names on view objects that are made up of multiple entities by using an attribute prefix. Use an alias property as both the table alias and column alias in the SQL, as well as the view object attribute prefix. For example: The POLinesVO includes both the HeaderEO and the LinesEO. The LinesEO is specified as the primary entity on POLinesVO; the HeaderEO is specified as a reference entity. This view object includes HeaderId attributes from both HeaderEO and LinesEO. To avoid duplication of attributes across the Header and Lines entities, an entity object alias is specified, for example, Header and Lines for HeaderEO and LinesEO respectively. The POLinesVO is then created using Header as the prefix for all Header attributes and Lines as the prefix for all Lines attributes, for example, HeaderHeaderId and LinesHeaderId; HeaderBusinessUnitId and LinesBusinessUnitId.
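The alias-prefix convention determines the attribute names that consumers see at runtime. The hedged sketch below reads both same-named attributes from a flattened POLinesVO row; the instance name follows the example above but is otherwise hypothetical.

    import oracle.jbo.Row;
    import oracle.jbo.ViewObject;

    public class PrefixedAttributeAccess {
        static void printHeaderIds(ViewObject poLinesVO) {
            poLinesVO.executeQuery();
            Row row = poLinesVO.first();
            if (row != null) {
                // Both entities contribute a HeaderId; the alias prefix keeps
                // the flattened attribute names unambiguous.
                Object headerHeaderId = row.getAttribute("HeaderHeaderId");
                Object linesHeaderId = row.getAttribute("LinesHeaderId");
                System.out.println(headerHeaderId + " / " + linesHeaderId);
            }
        }
    }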
An outer join is generally required when creating a view object based on multiple entity objects, so as to handle situations in which not all of the reference entities' values are present. The specific outer join type (left, right, or full) should be determined based on the expected data relationships between the primary and reference entities. Note, however, that in some cases security considerations will require an inner join instead. (For an example, see Section 59.4.1, "Designing Fact View Objects.") If a join is required only to resolve an ID or Code attribute, use a _VL view instead.

View links are required to flatten view objects using the Composite View Object API. To define outer joins on view links, you must add the BI_JOINTYPE custom property on the view link definition. Valid values for this custom property are:
- LEFTOUTER
- RIGHTOUTER
- FULLOUTER
- INNER (default)

For more information, see "Working with Multiple Tables in a Master-Detail Hierarchy" in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.

A view criteria identifies filter information for the rows of a view object collection. Required filters for view objects for Oracle BI Applications should be created using named view criteria. This includes:
- Security filters. For more information, see Section 59.3.4, "Understanding Business Intelligence Filters."
- Functional filters for Transactional Business Intelligence or Oracle BI Applications. Only filters required by both Transactional Business Intelligence and Oracle BI Applications should be created for view objects that are shared by both products.
- Filters to distinguish different logical entities based on the same entity object (for single entity object view objects).

For more information, see "Working with Named View Criteria" in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.

This section discusses Oracle Business Intelligence design patterns, including flattened view objects, fact-dimension relationships, self-referencing entities, filters, and translations.

The grain of a fact table represents the most atomic level at which the facts may be defined. The fact or dimension grain required for Oracle Business Intelligence modeling should determine the flattening required for view objects. You should only create flattened view objects for fact or dimension levels required by either Transactional Business Intelligence or Oracle BI Applications. For example, if neither Transactional Business Intelligence nor Oracle BI Applications requires purchase order (PO) Shipments, then do not create a flattened POShipmentsVO.

When flattening entity objects in a view object, include only entity objects that do not change the grain of the fact or dimension. For example, if attributes from a backing requisition line are needed on the POLinesVO, then the Requisition Line entity object should only be included in the flattened POLinesVO if the join does not change the grain of the POLinesVO to Requisition Line. A 1:n relationship requires two view objects only if you want to aggregate attributes from the child and store the result at the grain of the parent. Flattened view objects should be modeled in the Oracle Business Intelligence layer as a single logical table with multiple logical table sources.

You should follow these rules when designing and creating fact and dimension view objects:
- Create separate view objects for fact entities and dimension entities.
- Do not flatten relationships between facts and dimensions into a single view object.
- Create a view link between the FactVO and the DimensionVO. Specify the FactVO as the source of the view link and the DimensionVO as the target (see the sketch below).
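The following hedged sketch creates such a fact-to-dimension view link at runtime with the standard ApplicationModule API. The view object instances, attribute names, and accessor name are hypothetical, and in practice these view links are normally defined declaratively at design time.

    import oracle.jbo.ApplicationModule;
    import oracle.jbo.AttributeDef;
    import oracle.jbo.ViewLink;
    import oracle.jbo.ViewObject;

    public class FactDimensionLink {
        static ViewLink linkFactToDimension(ApplicationModule am) {
            ViewObject fact = am.findViewObject("PaymentFactVO");      // hypothetical
            ViewObject dim  = am.findViewObject("BusinessUnitDimVO");  // hypothetical

            AttributeDef[] src = { fact.findAttributeDef("BusinessUnitId") };
            AttributeDef[] dst = { dim.findAttributeDef("BusinessUnitId") };

            // The fact is the source of the view link; the dimension is the target.
            return am.createViewLinkBetweenViewObjects(
                    "FactToBusinessUnitVL", "BusinessUnitDim",
                    fact, src, dim, dst, null);
        }
    }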
In the case of a fact view object where the self-joins represent two different but functionally important objects, you should create separate view object instances that represent the two objects, and then define a view link between them. If the self-join does not need to be represented as separate objects, you should resolve the foreign key ID column to a more meaningful column. For example, the InvoiceHeaderVO contains the following attributes:
- InvoiceId
- InvoiceNum
- TaxRelatedInvoiceId
- CreditedInvoiceId

If you decide that these should be modeled as three separate facts, then you create two additional view instances, TaxRelatedInvoicesVO and CreditedInvoicesVO, with view links to the InvoiceHeaderVO (a runtime sketch appears at the end of this section). If you decide that they do not need to be modeled as separate objects, then you should create the two additional joins inside the InvoiceHeaderVO to bring in TaxRelatedInvoiceNum and CreditedInvoiceNum.

Row and column flattening is required for view objects with self-joins that are modeled as dimensions in Oracle Business Intelligence Applications. You should determine the level of flattening required on a case-by-case basis.

Only filters that are common to both Transactional Business Intelligence and Oracle BI Applications should be defined on shared view objects. If Transactional Business Intelligence requires additional filtering for a Transactional Business Intelligence-specific application, the filter should be defined in the Oracle BI EE layer. If Oracle BI Applications needs to filter data from a shared view object for extraction, these filters need to be defined in the ETL layer. Also note that view criteria cannot be pruned from the SQL at runtime.

All Fusion translatable entities with a corresponding _TL table require entity objects based on both the _B and _TL tables. You should create a flattened view object to join the _B and _TL entity objects. Oracle BI Applications performs ETL (extract, transform, and load) processes from the flattened view object with no additional filters. However, Transactional Business Intelligence requires an additional session language filter in the Oracle Business Intelligence layer.

Note: The entity object associations required for ID and Code resolutions to Multi-Language Support-enabled entities should use a _VL view.

All date-effective entities for a logical fact or dimension should be flattened and adhere to the following:
- Date-effective entity objects and view objects should be marked as such according to Oracle ADF.
- The flattening requirement excludes scenarios where other design considerations require not flattening the entity objects in the view object, for example, 1:n relationships.

For example, if both PersonEO and PersonDetailEO are date effective, the PersonsVO should be flattened to include both entity objects and should also be marked as date effective. In other words, there should be a single current person details record for each person record. For more information, see "How to Store Data Pertaining to a Specific Point in Time" in Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework.

Oracle BI Applications identifies any date-effective entity objects from which historical information is needed; single view object flattening does not meet this requirement. To compensate, you need to:
- Create a separate view object for these entity objects. The entity object is removed from the flattened view object.
- Create view links to join these view objects.

You must only include one historical entity object for any given view object in Oracle BI Applications. You should still mark the view objects as date effective so that Transactional Business Intelligence can share them and a date-effective predicate can be applied in the Oracle Business Intelligence layer.
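Returning to the self-join invoice example above, the following is a hedged sketch of creating the two additional view instances at runtime from the same view object definition. The definition name is hypothetical, and design-time definition is the usual approach.

    import oracle.jbo.ApplicationModule;
    import oracle.jbo.ViewObject;

    public class SelfJoinInstances {
        static void createInvoiceInstances(ApplicationModule am) {
            // Two more instances of the same underlying definition, modeling
            // tax-related and credited invoices as separate logical facts.
            ViewObject taxRelated = am.createViewObject(
                    "TaxRelatedInvoicesVO", "oracle.apps.model.InvoiceHeaderVO");
            ViewObject credited = am.createViewObject(
                    "CreditedInvoicesVO", "oracle.apps.model.InvoiceHeaderVO");
            // View links back to InvoiceHeaderVO would then be defined on
            // TaxRelatedInvoiceId and CreditedInvoiceId respectively.
        }
    }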
Separate view objects should be created for fact entities and dimension entities. Relationships between facts and dimensions should not be flattened into a single view object. Instead, you should create a separate FactVO and DimensionVO and then create a view link between them, specifying the FactVO as the source of the view link and the DimensionVO as the target.

A flattened view object should be created for each logical fact grain in Transactional Business Intelligence and Oracle BI Applications. For example, a purchase order contains four fact levels: Header, Lines, Shipments, and Distributions. Flattened view objects should be created to represent each of the four fact grains, as shown in Table 59-2. Entity objects can be included in flattened view objects as required, as long as the view object grain does not change.

Note: View links are not required between these view objects.

Join Type for Multi-Level Facts

Join types on entity associations between multi-level facts should be inner joins, because there are security impacts if entity associations are modeled as outer joins. For example, consider the query "Show me all PO headers with no associated distribution rows." If an outer join were used, you would need to implement security on both the header and the distribution entities in the DistributionVO. This would prevent pruning of the header entity from the DistributionVO; it is also a change from the current guideline to secure only the primary entity.

The following are general guidelines for securing fact view objects. The sub-sections describe different design patterns that may arise for Oracle Business Intelligence use cases, along with solutions for each design pattern.

Fusion Data Security view criteria should be applied to the fact view object. For more information about Fusion Data Security view criteria, see Section 48.3.2, "How to Secure Rows Queried By Entity-Based View Objects." The data security view criteria should contain:
- Privilege – The relevant object privilege.
- Object – The object being secured. For Multi-Organization Access Control (MOAC) style grants, the object being secured is Business Unit, based on the way MOAC grants are authored. For other grants, it can be the transactional object.
- Alias – The alias for the object. An alias is mandatory for view objects for Oracle Business Intelligence Applications privileges.

For example, for the Payment Invoices fact view object using Business Unit security (MOAC style), the privilege is:

"FNDDS__AP_MANAGE_PAYABLES_INVOICE_DATA__FUN_ALL_BUSINESS_UNITS_V__BU"

The fact view object requires an entity object for securing the table (BU in this example). The join between the fact and the securing table should be properly resolved. The alias used in the view criteria should be that of the entity object corresponding to the Object in the privilege (BU in this example).

If a non-MOAC grant is made for a transaction object, such as the Payment fact of the Oracle Fusion Incentive Compensation Management (ICM) application, the object and alias refer to the ICM Payment entity. For example:

"FNDDS__VIEW_INCENTIVE_COMPENSATION_PAYSHEET_DATA__IC_INCENTIVE_COMPENSATION_PAYSHEET__ICPAY"
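Both privilege strings above follow the same FNDDS__<privilege>__<object>__<alias> naming pattern. The following is a minimal, hypothetical helper that composes such names; it is illustrative only and not part of any Oracle API.

    public class FnddsNames {
        // Hypothetical helper: composes a data security view criteria name
        // following the FNDDS__<privilege>__<object>__<alias> pattern.
        static String fnddsViewCriteriaName(String privilege, String object, String alias) {
            return "FNDDS__" + privilege + "__" + object + "__" + alias;
        }

        public static void main(String[] args) {
            System.out.println(fnddsViewCriteriaName(
                    "AP_MANAGE_PAYABLES_INVOICE_DATA",
                    "FUN_ALL_BUSINESS_UNITS_V",
                    "BU"));
            // Prints: FNDDS__AP_MANAGE_PAYABLES_INVOICE_DATA__FUN_ALL_BUSINESS_UNITS_V__BU
        }
    }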
In Oracle Fusion Applications, transaction data can be secured by more than one entity, based on the role used to access the transaction data. For example, consider the case of the Oracle Fusion Incentive Compensation Management (ICM) application, in which:
- The Incentive Compensation Paysheet Management Duty role can see Incentive Compensation paysheets for the participants for whom they are responsible.
- The Incentive Compensation Process Management Duty role can see Incentive Compensation paysheets for the business units for which they are authorized.

In this case, because the view object for the transaction object implements a single data security privilege, the privilege should be able to provide access based on business unit as well as participants. Building this privilege provides a logical filter similar to: "for the participants for whom they are responsible" OR "for the business units for which they are authorized".

You can achieve this by creating a new privilege and then creating two policies using the same privilege. One policy should be created using an instance set to provide "for the business units for which they are authorized", and the second policy should be created using an instance set to provide "for the participants for whom they are responsible". The policies should be granted to existing roles.

To secure a transaction by more than one entity (the following steps are based on the Fusion ICM Paysheet use case):
1. Create a new data privilege titled View Incentive Compensation Paysheet Data.
2. Author the following data security policies, using existing duties and the new data privilege defined in Step 1:
   a. <Incentive Compensation Paysheet Management Duty> can <view> <Incentive Compensation Paysheets> <for the participants for whom they are responsible>
   b. <Incentive Compensation Process Management Duty> can <view> <Incentive Compensation Paysheets> <for the business units for which they are authorized>
3. Define the following grants:
   - For the data security policy described in Step 2a, attach the View Incentive Compensation Paysheet Data data privilege to the same FND_MENU that contains the grant for the Manage data privilege. This grants VIEW privileges to the same roles that have the Manage privilege, reducing the number of grants to be managed.
   - For the data security policy described in Step 2b, create a non-MOAC grant against the Incentive Compensation Paysheet object against the business unit (BU) data role. This grant is parameterized instance set based, with the instance set returning Paysheet data by BU, using the BU on the data role as the parameter value. This grant carries only the VIEW data privilege. Note: This is a data role grant; the role and grant are generated during the implementation phase using the data role template.
4. Use the privilege View Incentive Compensation Paysheet Data in the data security view criteria in Incentive Compensation Paysheet. The view criteria should be like Example 59-1, assuming IC_INCENTIVE_COMPENSATION_PAYSHEET is the object registered in FND_OBJECT and ICPAY is the alias used for the entity object.

Caution: With regard to the above proposal, if a user happens to have both the Incentive Compensation Paysheet Management Duty and Incentive Compensation Process Management Duty roles granted, the Business Intelligence report will show the UNION of data, that is, data for authorized business units AND data for responsible participants. Whether such reporting behavior is acceptable should be decided on a case-by-case basis.
For Oracle BI Applications, the UNION effect in the above example, based on Oracle Fusion Incentive Compensation Management reporting (access by participants and access by business units), must be achieved in Oracle BI EE based on an OR join for the individual dimensions. This can potentially be achieved by using two separate groups (one for business units and another for participants) and giving a user access to both groups (since predicates are ORed across Oracle BI EE groups).

There are other use cases that fall into the same design pattern of a transaction being secured by multiple entities with the Oracle Business Intelligence implementation needing UNION access. For example, in Oracle Fusion Projects, the transaction table Expenditure Item is secured by Business Unit as well as by Project. For Oracle Business Intelligence reporting, the query on Expenditure Item should return rows for the business units authorized for a user as well as the projects authorized for that user.

In general, while the Oracle Business Intelligence use cases for a transaction being secured by multiple entities will be similar, application teams can make their own decisions about how they implement an Oracle Business Intelligence solution. For example, in the case of the Oracle Fusion Incentive Compensation Management and Oracle Fusion Projects applications, you can implement different solutions to achieve the same end results by having different styles of grants and roles. Therefore, application teams should choose their own implementation based on their existing roles and privileges, and the approach they want to take for the Oracle Business Intelligence solution.

When transactions are analyzed in the context of dimensions, sometimes the dimensions have their own security, which is not applicable for usage with the transaction. For example, Grade data is secured using Fusion data security. When analyzing Assignment data, relevant information from the Grade dimension is required; however, the data security for the Grade dimension is not applicable when it is used for analyzing assignments. Instead, the Grade dimension behaves as an unsecured source of data when used with the assignment fact.

A solution for this use case is to create two view objects for the dimensions for which security is not required when analyzing the fact. The two view objects should form two logical table sources (LTS) for the dimension:
- The first dimension view object implements data security. It is used in dimension browsing and can include all columns required for dimension browsing. To ensure that Oracle BI EE uses the secured version for dimension browsing, make sure it is higher up in the list of logical table sources than the unsecured one.
- The second dimension view object should be unsecured. To ensure that the unsecured view object is used in combination with the fact, physical joins should be defined between the physical fact table and the physical table for the unsecured version of the dimension view object.

Caution: Dimensions for which unsecured view objects are created may contain sensitive attributes. If this is the case, you must make sure that the unsecured view object does not contain these sensitive attributes.

Dual (secured and unsecured) view objects are only required for entities that fall into this design pattern. Entities not requiring both secured and unsecured access do not require dual view objects.

Analysis of a fact may need reference information from another fact.
In Transactional Business Intelligence, this is handled by creating a degenerate dimension for a fact whose attribute information is used in other facts. The degenerate dimension is just a logical layer entity in the RPD, and it uses the same view object as the underlying fact. As a result, the data security for the degenerate dimension is the same as that of the underlying fact table. This may create a problem when the degenerate dimension is used in another fact that has different security than the degenerate dimension (or, more accurately, the fact underlying the degenerate dimension). For example:
- There is a degenerate dimension, Dimension A, on top of Fact A.
- Dimension A is used in Fact B as a reference.
- Fact B is secured using a different dimension (or different privilege) than Fact A, which was used as the source for Dimension A.

In such cases, the security of both Fact B and Fact A is applied, whereas the desired result was to apply only the security of Fact B.

The Multi-Org Access Control (MOAC) ADF infrastructure enables Fusion transaction applications to implement business unit based data security. Because the Oracle Business Intelligence technical stack works on the view definitions, the ADF Business Components MOAC infrastructure does not work for view objects for Oracle Business Intelligence Applications. These view objects should instead use the underlying Fusion Data Security to support business unit based security.

This section discusses how to design and secure dimensions. A flattened view object should be created for each logical dimension grain in Transactional Business Intelligence and Oracle BI Applications. For example, for the Geography dimension, the following view objects are required to represent each dimension grain:
- Zip grain — Zip Code
- City grain — City + Zip Code
- State grain — State + City + Zip Code

These should be modeled as a single Geography logical dimension table with multiple logical table sources, one for each of the dimension grains.

Create a view link between the Transactional Business Intelligence fact view objects that have Business Unit dimensionality and the common business unit dimension view object, based on Business Unit ID. If the dimension needs to be secured, the FND view criteria should be applied on the dimension view object.

The following use case is an example of how you should secure dimensions that are composed of multiple entities. The Inventory Organization dimension is composed of the following three entities:
- InventoryOrgParameters
- HrOrganizationUnits
- HrLocations

Human Resources (HR) entities may have their own security. However, for the InventoryOrgParameters entity, only the security defined by the inventory product privilege Manage Inventory Org Parameters should be used. In other words, data security on HR entities should be ignored when they are consumed in the Inventory Organization dimension. This use case is similar to Section 59.4.2.2, "Securing Transactions Different from Securing Dimensions," where unsecured view objects are used for dimensions.

The Business Unit dimension is used to secure transaction data. When used in conjunction with transaction data, a secured version of the Business Unit dimension, which can return the business units allowed for a user for a function, is required. For example, a secured version of Business Unit is required to populate init block security variables for Oracle BI Applications. However, if a user needs to browse only the business unit data, the user is allowed to see all of the dimension data.
Therefore, it is deemed an unsecured dimension for dimension browsing in Oracle BI Applications. To use an unsecured view object for dimension browsing, make sure it is higher up in the list of logical table sources than the secured one.

Separate view objects should be created for the primary dimension entity and for multi-valued dimension attribute entities. For example, using the Person model, the following view objects should be created:
- PersonVO — Person only
- PersonAddressesVO — Addresses only
- PersonPhonesVO — Phones only

The following view links establish relationships between the view objects:
- PersonToAddressesVL — PersonVO -> PersonAddressesVO
- PersonToPhonesVL — PersonVO -> PersonPhonesVO

Note: The above example uses the Person model with a person having an address and phone. Keep in mind that Transactional Business Intelligence models only the primary address and phone number, while Oracle BI Applications can model more than one address and phone number per person.

Junk dimensions should not be directly sourced from view objects. Oracle BI Applications should build them from Fact Stage tables, and Transactional Business Intelligence should build them from the degenerate attributes in Fact tables. Mini dimensions should not be sourced from view objects; Oracle BI Applications should build them from Dimension tables.

There are a number of situations in which a secured dimension view object must be deployed with an accompanying unsecured dimension view object. In this case, the term unsecured does not simply mean that security is disabled, but also that a subset of the column set of the secured dimension view object may be excluded from the unsecured version. Generally, the strategy for developing and deploying a pair of corresponding dimension view objects, where one is secured and the other unsecured, consists of the following:
1. A base dimension view object satisfying the basic, functional requirements for data retrieval is initially developed.
2. The base dimension view object is used to create a secured dimension view object by using the methods and strategies described earlier in this section.
3. An unsecured dimension view object is developed by manually creating an exact copy of the original base dimension view object. The unsecured dimension view object is named <VO Name>ListPVO, where <VO Name> is the name of the base dimension view object.
4. The unsecured dimension view object is modified so as to exclude sensitive columns from its column sets.
5. The unsecured dimension view object is deployed in the same application module as its associated secured dimension view object.

Consuming applications must build view links to both the secured and unsecured dimension view object definitions. Once the secured and unsecured dimension view objects have been deployed, you can begin developing models based upon them in Oracle Business Intelligence.

This section discusses the Gregorian calendar as well as the special handling that is required for the fiscal calendar, the projects calendar, Timestamp columns, and role-playing date dimensions.

Date dimension view objects for the Gregorian calendar are delivered through the ATG libraries. You should create a view link between the Gregorian calendar day-level view object and all the facts that join with the date dimension. Create the view link with the fact view object as the source and the day-level view object as the target. For all other calendars needed for the fact in a particular functional area, a view link should be created to the time dimension at the day level of the fact.
For example, if the fact is at the day level in Financials and the reporting calendar is fiscal (in addition to Gregorian), view links should be created to the day level of the fiscal calendar. If the fact is at the day level, you should create view links to the day level of the fiscal calendar only. For all facts at the day level, the view link between the fact view object and the day-level flattened view object should include the ADJUSTMENT_PERIOD_FLAG = N condition to avoid double counting if the same day belongs to both a normal period and an adjusting period.

Projects facts that need to be analyzed by the fiscal calendar require a view link between the fact and the day level of the fiscal calendar on the date. Also required is a view link between the fact and the General Ledger on the Ledger ID column, using the Fun_all_business_units_V table that is present on the fact side. Projects facts that need to be analyzed by the projects calendar require a view link between the fact and the day level of the projects calendar on the date. Also required is a view link between the fact and the pjf_bu_impl_all_v table on the Business Unit ID.

If the date column of a fact view object involves a timestamp, teams will need to create a new SQL-derived attribute to populate the date without the timestamp. A view link will also need to be created using the new date column of the fact view object and the day-level time dimension view object. If the fact view object date column does not have a timestamp, it can be used directly for creating the view link.

If a lookup type is used as a dimension in Transactional Business Intelligence, you must deliver the dimension view object as follows (see the sketch below):
1. Create a <Product short name>LookupsVO. This new view object should be based on the product-specific lookup view on FND_LOOKUPS. If a view on FND_LOOKUPS is not available or not required for online transaction processing, the view object should be based on FND_LOOKUPS directly, with an additional filter on all included Application IDs for the product.
2. Create a view criteria for each lookup type.
3. Create a view instance using the above view criteria for each dimension.

Foreign keys to low-cardinality lookups, such as FND_LOOKUPS, should not be resolved in fact or dimension view objects. These should be resolved in the logical layer through the lookup function. Transactional Business Intelligence-only low-cardinality lookups should be resolved using entity object associations based on a _VL view.
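For the lookup dimension procedure above, the following is a hedged sketch of applying a per-lookup-type view criteria at runtime. The instance name, attribute name, and lookup type are hypothetical, and in practice the view criteria would normally be defined declaratively.

    import oracle.jbo.ApplicationModule;
    import oracle.jbo.ViewCriteria;
    import oracle.jbo.ViewCriteriaRow;
    import oracle.jbo.ViewObject;

    public class LookupDimensionFilter {
        static void filterByLookupType(ApplicationModule am) {
            ViewObject lookups = am.findViewObject("PoLookupsVO"); // hypothetical
            ViewCriteria vc = lookups.createViewCriteria();
            ViewCriteriaRow row = vc.createViewCriteriaRow();
            // Restrict the instance to a single lookup type (hypothetical type).
            row.setAttribute("LookupType", "= 'PO_APPROVAL_STATUS'");
            vc.add(row);
            lookups.applyViewCriteria(vc);
            lookups.executeQuery();
        }
    }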
Application trees managed by the Fusion tree management infrastructure may be exposed to Oracle Business Intelligence systems, such as Oracle BI EE, for analysis. This is done by providing a view object that contains a column-flattened version of the tree data joined with the tree data sources. Such a view object is called a column-flattened view object for Business Intelligence (BICVO). Designing and securing tree data for Oracle Business Intelligence involves the following activities:
- Designing a column-flattened view object
- Customizing the FND table structure and indexes
- Using declarative SQL mode to design view objects
- Securing Oracle ADF view objects for trees

Column-flattening is generally available for level-based trees. For those trees that may be exposed to Oracle Business Intelligence systems, such as Oracle BI EE, column-flattening for value-based trees is also available. Figure 59-2 illustrates a generic example of a value-based tree.

Each node has a unique identity, in this case denoted by dot-separated numbers that correspond to the node's relative ordering in the overall parent-child structure. Such value hierarchies may be arbitrarily recursive (in terms of recurring node types), and are usually ragged, or unbalanced. There is only a general concept of "level" in these hierarchies, which refers to the path distance (or depth) from the root node to some specified node. Two nodes the same distance from the root are thought of as being at the same level. However, unlike true level-based trees, there is no requirement for nodes at the same level to possess a common set of properties. In fact, a node in a value-based tree may have any arbitrary collection of properties. When these trees are used to represent dimensional hierarchies, fact values (metrics or transactions) may be joined to any node. There is no constraint that facts or transactions be joined only to lowest-level nodes, as is usually the case with level-based trees.

The example value-based tree shown in Figure 59-2 also has multiple top-level, or root-level, nodes. Since it has five levels (or, equivalently, a maximum depth of four), a column-flattened representation of this tree requires a minimum of five columns. This is illustrated in Table 59-3.

Note: In practice, you would never have single-node trees. However, root nodes 2.0 and 3.0 are included in Figure 59-2 simply to illustrate multiple top nodes.

The following conventions apply to the logical column-flattened representation shown in Table 59-3 (a small sketch implementing these conventions follows below):
- The first column (C0) contains a complete enumeration of each node in the tree. In this example, each node is represented by the value of its unique identity. Having the unique identity of each node of the hierarchy represented exactly once in the C0 column means that it is always possible to directly address each node, such as for purposes of joining with a transaction or measure, or for performing a calculation on that node.
- The last column (C4 in this example) always represents the root node of some rooted ancestral path of the tree.
- The intermediate ancestral path between a given node in the C0 column and its ancestral root node in the C4 column is represented by columns C1 through C3. Each column stores a reference to some node of the ancestral path, descending from C3 toward C0, filling each column (from right to left) with a reference to the next child node of the path. When a reference to the C0 column's node occurs, this reference is then repeated, if necessary, so as to pad the remaining columns until the C0 column is reached. Having the complete ancestral path, with unused columns padded toward the C0 node value, facilitates more efficient drill-down operations.
- There is no implied ordering of the rows in the column-flattened representation. The complete hierarchy is represented by the table content, and a normalized representation can always be inferred or reconstructed from the flattened data set.

As far as Fusion tree management is concerned, the column-flattened representation always consists of a number of columns greater than, or equal to, the depth of the tree. If this were not the case, you would need a strategy for pruning or condensing the tree (for example, removal of intermediate nodes from the ancestral paths). On the other hand, having the number of columns exceed the depth of the tree is never problematic, because of the repeated padding of C0 node values.
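To make the padding convention concrete, the following is a minimal, self-contained sketch (using a hypothetical subset of the nodes in Figure 59-2) that produces one padded, column-flattened row per node: C0 holds the node, the last column holds its root, and the node's own value pads any unused columns toward C0.

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class TreeFlattener {
        // Child -> parent map; roots map to null (hypothetical subset of Figure 59-2).
        static final Map<String, String> PARENT = new LinkedHashMap<>();
        static {
            PARENT.put("1.0", null);
            PARENT.put("1.1", "1.0");
            PARENT.put("1.2", "1.0");
            PARENT.put("1.2.2", "1.2");
        }

        // Builds one column-flattened row for a node, per the conventions above.
        static String[] flattenRow(String node, int numCols) {
            List<String> pathFromRoot = new ArrayList<>();
            for (String n = node; n != null; n = PARENT.get(n)) {
                pathFromRoot.add(0, n); // prepend so the list runs root ... node
            }
            String[] row = new String[numCols];
            int next = 0;
            // Fill from the rightmost column (root) toward C0; once the path is
            // exhausted, pad the remaining columns with the node's own value.
            for (int c = numCols - 1; c >= 0; c--) {
                row[c] = next < pathFromRoot.size() ? pathFromRoot.get(next++) : node;
            }
            return row;
        }

        public static void main(String[] args) {
            for (String node : PARENT.keySet()) {
                // For node 1.2.2 this prints (C0..C4): 1.2.2 1.2.2 1.2.2 1.2 1.0
                System.out.println(String.join(" ", flattenRow(node, 5)));
            }
        }
    }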
ATG services allow you to specify a fixed maximum depth of up to 32 levels when defining a tree. For example, if you specify a 20-level tree, your column-flattened representation will contain 20 columns, C0 through C19, with padding of values toward the leaf, as shown in Table 59-4.

Think of the tree in Figure 59-2 as a true level-based tree, with fixed levels, single top nodes, and all leaf nodes residing at the same lowest level of the tree (such as level zero, represented by column C0). In this case, you would actually have three separate trees, and the tree rooted at node 1.0 would have the logical column-flattened representation shown in Table 59-5, assuming the same "pad toward leaf values" scheme as with the value-based tree. The notion of distance from the root is still relevant, even though all of the leaf nodes are assumed to reside at the same level (level zero, or C0).

Attributes from the column-flattened version of the tree data use standard ADF Business Components attribute naming conventions. Attributes from the tree data sources also use the same naming convention, but are prefixed with DepN, where N is the zero-based height of the node within the tree; for example, Dep7EmployeeName or Dep13ProjectName. The Dep0 prefix is used for leaf nodes.

The following procedure is a summary of the overall process of defining and generating declarative BICVOs for trees. For more detailed information about the strategy for creating these BICVOs, see Section 59.8.4, "Guidelines for ATG-Registration and BICVO Generation" and Section 59.8.6, "Securing ADF Business Components View Objects for Trees."

To generate a BICVO automatically using Tree Management:
1. Ensure that the namespace path /oracle/apps/fnd/applcore/trees/analytics is configured in Oracle Metadata Service (MDS). Example 59-2 shows a sample MDS configuration (the attribute values shown are representative).

Example 59-2 MDS Configuration

    <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
      <mds-config xmlns="http://xmlns.oracle.com/mds/config">
        <persistence-config>
          <metadata-namespaces>
            <namespace path="/sessiondef" metadata-store-usage="mdsRepos"/>
            <namespace path="/persdef" metadata-store-usage="mdsRepos"/>
            <namespace path="/oracle/apps/fnd/applcore/trees/analytics"
                       metadata-store-usage="mdsRepos"/>
          </metadata-namespaces>
          <metadata-store-usages>
            <metadata-store-usage id="mdsRepos" deploy-target="true">
              <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
                <property name="metadata-path" value="/tmp"/>
              </metadata-store>
            </metadata-store-usage>
          </metadata-store-usages>
        </persistence-config>
      </mds-config>
    </adf-mds-config>

2. Ensure that each view object attribute of the tree data source view objects is marked as relevant to Oracle Business Intelligence (or not) via the BI Relevant property that is exposed in the Property Inspector for the view object attribute. Note: By default, only primary key attributes are BI Relevant. For performance reasons, it is recommended that only those attributes that are really relevant to Oracle Business Intelligence be marked as such, to avoid generating very large BICVOs.
3. Ensure that column flattening is enabled by specifying the column-flattened table and, optionally, the entity object for the table while setting up the tree structure. For more information, see Section 19.3.5, "How to Create a Tree Structure." The tree management infrastructure then generates the BICVO for the tree structure into MDS.
4. Secure the generated BICVO using the data security infrastructure. For more information, see Section 59.8.6, "Securing ADF Business Components View Objects for Trees."

The generated BICVO includes a special view criteria named FNDDS__BICVO. In order to secure access to data through the BICVO, this view criteria must be enabled for instances of the BICVO in any application module. At runtime, data security rules affecting access to the tree data source view objects are automatically carried over to the BICVO.

Note: In Oracle Fusion Applications V1, only filter-based data security rules are supported. In addition, only the "is descendant of" operator is supported.
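Enabling the view criteria is normally done declaratively in the application module's data model, but a hedged runtime sketch, with a hypothetical BICVO instance name and assuming the ViewObjectImpl API, might look like this.

    import oracle.jbo.ApplicationModule;
    import oracle.jbo.ViewCriteria;
    import oracle.jbo.server.ViewObjectImpl;

    public class EnableBicvoSecurity {
        static void enableBicvoViewCriteria(ApplicationModule am) {
            // Hypothetical BICVO instance name.
            ViewObjectImpl bicvo = (ViewObjectImpl) am.findViewObject("ProjectTaskBICVO");
            // Look up the generated data security view criteria by its fixed
            // name and apply it so tree data access is secured for this instance.
            ViewCriteria vc = bicvo.getViewCriteriaManager().getViewCriteria("FNDDS__BICVO");
            bicvo.applyViewCriteria(vc);
            bicvo.executeQuery();
        }
    }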
When using Oracle Fusion tree management to create and manage your trees, you should create and register your own custom versions of the FND_TREE_NODE and FND_TREE_NODE_CF tables. This prevents applications from competing for use of the FND tables. Your custom tables must comply with the following rules:
- They must have custom (preferably application-specific) names; for example, PJF_PROJ_ELEMENTS_CF is currently being used by the Projects team to implement a column-flattened table for the Task Hierarchy.
- The column names and column data types of each custom table must be exactly the same as those of the corresponding FND table.
- Custom versions of FND_TREE_NODE_CF can define an index on each of the level-based foreign key references to support efficient drill-downs. However, certain application query patterns do not necessitate this degree of indexing; indexing is also not necessary if the column-flattened table is guaranteed to be relatively small.
- Custom versions of FND_TREE_NODE_CF should not include the ENTERPRISE_ID column as part of the primary key index defined on the custom table, because this column is not currently used by Oracle Fusion tree management.

All view objects for Oracle Business Intelligence Applications should be constructed in declarative SQL mode. This ensures that correct SQL pruning can be applied to any composite view object incorporating the Oracle Business Intelligence view object. This requirement also applies to the BICVO generated by Oracle Fusion tree management. However, of all the possible configurations of ADF Business Components objects defining a tree data source, only two configurations actually lend themselves to the generation of declarative-mode BICVOs by Oracle Fusion tree management. These configurations have been formalized as two distinct design patterns:
- Design Pattern #1: Single data source view object, single data source entity object
- Design Pattern #2: Multiple data source view objects, a unique data source view object per depth of tree, and a single data source entity object per data source view object

Although either pattern can be used in the realization of either tree type, the first pattern is generally better suited to value-based trees, while the second pattern is more natural for level-based trees. However, the patterns are aimed primarily at supporting the automated generation of declarative-mode BICVOs, rather than supporting either particular type of tree.

The first pattern ensures that Oracle Fusion tree management is capable of generating a declarative-mode BICVO from an Oracle Applications Technology (ATG)-registered data source. Figure 59-3 illustrates the ADF Business Components object configuration defining declarative BICVO Pattern #1.

Figure 59-3 Declarative BICVO Based on Single Data Source View Object, Single Data Source Entity Object

In this pattern, there is a single data source entity object and a single data source view object based on that entity object.
The data source view object is a declarative-mode view object built by developers and registered with Oracle Fusion tree management. The data source entity object in turn is based on a _VL database view that joins the data source base table (_B) with a table of translated values (_TL). A second entity object is defined for the column-flattened table. Currently, the column-flattened table entity object must be created manually and made known to the generated BICVO via a manual workaround. Additionally, a collection of entity object associations, each joining the column-flattened entity object with the data source entity object for a unique level or depth of the tree, must also be created manually.

If the application design requires that the base data source table expose multiple entity objects for any reason, then a _VL database view must be defined to join the multiple entity objects (possibly along with any translated attribute values), and that _VL database view must support the single data source entity object.

Once the data source view object is registered with Oracle Fusion tree management as part of the tree structure definition process, and the required manually-created objects are all in place, a declarative BICVO can be generated by Oracle Fusion tree management.

This declarative-mode BICVO pattern is well suited to value-based trees, since value-based trees are most often represented at the data source level by a single table with a recursive self-join. However, nothing about the pattern strictly requires its use for value-based hierarchies, nor prohibits its use for other types of hierarchies (such as level-based or hybrid). The primary objective of this pattern is to facilitate the automatic generation of a declarative-mode BICVO from an ATG-registered tree.

The second pattern likewise ensures that Oracle Fusion tree management is capable of generating a declarative-mode BICVO from an ATG-registered tree. Figure 59-4 illustrates the ADF Business Components object configuration defining declarative BICVO Pattern #2.

Figure 59-4 Declarative BICVO Based on Multiple Data Source View Objects, Unique Data Source View Object per Level, Single Data Source Entity Object per Data Source View Object

In this pattern, there are multiple data source view objects, with a unique data source view object representing each level or depth of the tree. Each data source view object is based on a single, unique data source entity object. Each data source view object is a declarative-mode view object built by developers and registered with Oracle Fusion tree management. All of the data source view objects must be declarative-mode view objects; otherwise, a declarative-mode BICVO cannot be generated.

As with the previous pattern, each data source entity object in turn is based on a _VL database view that joins some data source base table (_B) with a table of translated values (_TL). While multiple _VL database views are represented in the diagram, there is no hard-and-fast requirement that each data source entity object actually be built on top of a unique _VL database view. The diagram simply admits the possibility of multiple such views, presumably one per level or depth of the tree.

As with design pattern #1, an entity object is also defined for the column-flattened table; it too must be created manually and is made known to the generated BICVO via a manual workaround.
This column-flattened table entity object is also joined to the data source entity objects via a collection of entity object associations. However, each entity object association relates the column-flattened table entity object to a unique data source entity object representing a particular level or depth of the tree. If the application design requires that the base data source table expose multiple entity objects per tree level or depth, then a _VL database view must be defined to join the multiple entity objects (possibly along with any translated attribute values) at that tree level or depth, and that _VL database view must support the single data source entity object for that tree level or depth.

Once the data source view objects have been registered with Oracle Fusion tree management as part of the tree structure definition process, and the required manually-created objects have all been put in place, a declarative BICVO can be generated by Oracle Fusion tree management.

This declarative-mode BICVO pattern is well suited to level-based trees, since level-based trees are often built on top of multiple data sources, with a unique data source per level. However, nothing about the pattern strictly requires its use for level-based hierarchies, nor prohibits its use for other types of hierarchies (such as value-based or hybrid). The primary objective of this pattern is to facilitate the automatic generation of a declarative-mode BICVO from an ATG-registered tree.

In order to ensure correct SQL pruning, you must set the property values of the generated declarative-mode BICVO as follows:
- Designate the column-flattened table entity object as the primary entity of the BICVO.
- Designate the data source entity objects as secondary or reference entities of the BICVO.
- Do not mark primary key attributes of the data source entity objects as primary key attributes in the resulting column set. (These are exposed by the generated BICVO.)
- Set the selectedInQuery property of any non-primary key attribute of the generated BICVO to false.

The Oracle Fusion Applications team that owns the tree is responsible for creating a custom tree node (parent-child relationship) table that is structurally equivalent to FND_TREE_NODE. Once the tree node table has been created, it is registered with ATG via the Oracle Fusion tree management tree creation UI. The Oracle Fusion Applications team is also responsible for creating a custom column-flattened table that is structurally equivalent to FND_TREE_NODE_CF. This custom table is depicted in Figure 59-3. Once created, it is also registered with Oracle Fusion tree management as the column-flattened table associated with the tree, in the Oracle Fusion tree management creation UI.

The Oracle Fusion Applications team must then create both the data source view objects and the associated data source entity objects, according to either of the structural patterns illustrated in Figure 59-3 and Figure 59-4. As with the tree node and column-flattened tables, the data source view object is also registered with Oracle Fusion tree management via the Oracle Fusion tree creation UI. During the registration process, the developer may specify a custom property on any of the data source columns, indicating to Oracle Fusion tree management that these columns are relevant to Oracle Business Intelligence and need to be exposed at each level within the BICVO.
This collection of Oracle Business Intelligence attributes is represented by the set of view attributes attached to the data source view object. As a result, the generated BICVO will join these columns in from the data source entity object at each level of the tree, immediately following the level-specific data source foreign key references; that is, the sequence of DEP*_PK* columns is followed by a set of columns representing each of the BI-relevant attributes.

In addition to the view attributes representing the Oracle Business Intelligence-relevant columns of the data source, the data source view object may also be configured with one or more view criteria filters. In particular, a view criteria must be defined to enforce data security if there is a requirement for data security at the source level. Any other relevant filters required by reporting may also be specified and attached to the data source view object. Each of these view criteria must specify a logical AND condition as its connective to other defined view criteria.

Next, using the Oracle Fusion trees creation UI, the developer automatically generates the BICVO; that is, the column-flattened BICVO based on the column-flattened table. In Figure 59-3 and Figure 59-4, dashed lines represent joins on the underlying entities that are automatically added by Oracle Fusion tree management to the BICVO definition at runtime. These joins are inferred by ATG internal generation logic via inspection of the data source view object and its attendant view attributes and view criteria, as well as inspection of the registered column-flattened table.

The BICVO, as generated by Oracle Fusion tree management, also includes a placeholder view criteria that is otherwise empty and specifies a logical OR condition as its connective to any other view criteria that might be defined as part of the BICVO. This placeholder view criteria is defined for data security purposes and, at the current time, simply directs ATG logic to invoke the data security view criteria defined on the data source view object.

Note: There may also be a requirement to supply Oracle Data Integrator (ODI) with translations via a view object that is separate from the base data source table or _VL database view. In this case, you must develop a view object and entity object pair that goes directly against the translations table (_TL).

You must take this entire collection of ADF Business Components objects, both hand-crafted and generated alike, and package them for deployment as part of an appropriate application module. Note that any ATG-generated artifacts, such as the BICVO, are generated to reside within the Oracle Fusion Middleware Extensions for Applications package namespace, which is:

oracle.apps.fnd.bi.applcore.trees.bi.model.view

Most of the Oracle Business Intelligence view objects and other artifacts are packaged under the Oracle Business Intelligence analytics namespace, which is:

oracle.apps.<LBATop>.<LBACore>.publicview.analytics

However, the Oracle Fusion Middleware Extensions for Applications package namespace is acceptable for ATG-generated objects, because they are artifacts of the ATG-Oracle Fusion Middleware Extensions for Applications services infrastructure. As long as the interfaces of these objects are publicly visible, this should not present any problems to clients of these objects.
It is possible for an inconsistency to arise between the three realizations of a particular application hierarchy: the application itself, Transactional Business Intelligence, and Oracle BI Applications.

Hierarchies on the Oracle BI EE server are necessarily limited to a maximum of 15 levels. Oracle BI Applications uses data warehouse tables to represent these hierarchies, and although the tables are not inherently bounded in size, restrictions on the number of levels of a given hierarchy being imported into the data warehouse are enforced by the ETL process. The majority of Oracle BI Applications hierarchies are fixed at eight levels plus a top level, for a total of nine fixed levels. A very small number of Oracle BI Applications hierarchies have more than eight levels, plus a top level and a base level, with the largest of these hierarchies consisting of 21 fixed levels. Trees, especially value-based trees, are generally unbounded in size. However, trees that have been implemented using Oracle Fusion tree management services are limited to 32 levels by the ATG infrastructure.

Problems can potentially arise when an application tree exceeds 15 levels. When this occurs, the corresponding Oracle BI EE representation of the tree, such as an Oracle Business Intelligence hierarchy stored within the repository (RPD), must be compressed to 15 levels. This is accomplished by retaining the leaf level of the source tree (base level) as well as the root level (top level), and pruning the tree starting with the base-1 level and working up the tree until enough levels have been removed. Table 59-6 illustrates the general mapping of levels in the Oracle Business Intelligence hierarchy to levels or depths of application trees. The logical representation of the application tree is expressed in terms of the columns of the column-flattened Oracle Business Intelligence view object for that tree, which has a maximum of 32 levels. In this case, the 15-level (maximum) Oracle Business Intelligence RPD representation of the hierarchy is mapped to the 32-level (maximum) application BICVO representation of the tree by pruning the levels of the source tree designated by columns C1 through C17 of the BICVO.

When mapping application trees to Oracle Business Intelligence hierarchies, two types of problems may arise:
- The application tree exceeds 15 levels and the Transactional Business Intelligence realization of the hierarchy (provided by the Oracle BI EE server) has been pruned to 15 levels, while the Oracle BI Applications realization of the hierarchy (provided by the ETL process) is allowed to exceed 15 levels. In this case, the Transactional Business Intelligence and Oracle BI Applications realizations of the hierarchy have different resolutions at their lowest levels.
- The application tree exceeds 15 levels and the Transactional Business Intelligence and Oracle BI Applications realizations of the hierarchy are both pruned to 15 levels. In this case, Transactional Business Intelligence and Oracle BI Applications are the same in terms of resolution, but the Oracle Business Intelligence side and the application side are not; that is, the application tree has greater resolution than its Oracle Business Intelligence counterpart.
The following are two possible consequences of the problems outlined above:
- Loss of information (loss of resolution) resulting from the pruning away of several lower levels of the hierarchy, as well as potential differences in information (resolution) between Transactional Business Intelligence and Oracle BI Applications.
- An effect on fact-based security at the pruned levels. For example, suppose you have established certain privileges on facts joined to nodes at tree levels that are ultimately pruned away. The security privileges of facts that had been joined to the pruned nodes may have been more restrictive than those at ancestral levels.

There are basically two choices for either completely resolving, or at least mitigating, the potential problems.

Note: Neither of the following resolutions requires any actual implementation work. However, they do require a combination of policy and documentation.

- Complete resolution: Any application tree that has a realization on the Oracle Business Intelligence side (Transactional Business Intelligence or Oracle BI Applications) must be restricted to no more than 15 levels.
- Mitigation: Ensure that, if any application tree exceeds 15 levels and that tree has realizations in both Transactional Business Intelligence and Oracle BI Applications, both technologies maintain pruned realizations of this tree with the same number of levels (such as 15 or fewer). For this resolution, these situations must be investigated and documented on a case-by-case basis. You must decide how you want to adjust the security privileges of metrics that had previously been joined to the pruned levels, and then revise your Oracle Business Intelligence models accordingly.

Data security privileges are effectively applied to the column-flattened representation of the tree (as described in Table 59-3) in the form of a filter based on an OR condition across the columns. For example, suppose a reporting client has viewing privileges on nodes 1.1 and 1.2.2. This means that any row that contains either node in any of its columns (at any level in the tree) is viewable by the client, but the other rows are not. The viewable rows are shown in bold in Table 59-7. If the DescendantOf hierarchical referencing operator is also available, enabling the display of rows that contain 1.1, 1.2.2, or any descendant of either of these two nodes, then the viewable rows include the rows displayed in bold in Table 59-8.

Note: The generalized OR filter can be restricted, for example, to apply only to the C0 column. This ensures that only nodes, and optionally their descendants, for which a client has sufficient privileges are viewable from the column-flattened result set.
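The following is a hedged sketch of how such an OR filter across the flattened columns could be expressed with the ADF view criteria API. The view object instance, column names, and node values are hypothetical, and real deployments rely on the generated FNDDS__BICVO view criteria rather than hand-built filters. Rows added to a single view criteria are ORed together, which yields the "any column matches any privileged node" predicate described above.

    import oracle.jbo.ViewCriteria;
    import oracle.jbo.ViewCriteriaRow;
    import oracle.jbo.ViewObject;

    public class FlattenedTreeSecurityFilter {
        static void applyOrFilter(ViewObject bicvo) {
            String[] columns = {"C0", "C1", "C2", "C3", "C4"};
            String[] privilegedNodes = {"1.1", "1.2.2"}; // hypothetical grants

            ViewCriteria vc = bicvo.createViewCriteria();
            for (String column : columns) {
                for (String node : privilegedNodes) {
                    // Each ViewCriteriaRow is ORed with its siblings, so a row
                    // qualifies if any column holds any privileged node.
                    ViewCriteriaRow row = vc.createViewCriteriaRow();
                    row.setAttribute(column, "= '" + node + "'");
                    vc.add(row);
                }
            }
            bicvo.applyViewCriteria(vc);
            bicvo.executeQuery();
        }
    }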
The view criteria predicate for the BICVO is generated from the base table view object at runtime by Oracle Fusion tree management. This ensures that BICVO data security is in sync with the base object. The following restrictions are placed on the base object view criteria so that it can be mapped at runtime to the BICVO view criteria (which may have different column names):
- The base object view criteria must only use "Filter", which stores predicates using metadata. It cannot use SQL.
- The base object view criteria must only use the DescendantOf hierarchy operator. It must not use any other hierarchy operators.
There may be situations in which a tree must support both secured and unsecured access. In this case, the BICVO that exposes the tree structure is deployed as both a secured and an unsecured version. The generated BICVO already has a security mechanism associated with it that is based on its data source view object. An unsecured version of the BICVO can be created by manually making a copy of the generated BICVO and editing it to exclude sensitive columns. Secured access to this edited BICVO is then turned off by deactivating the dummy FNDDS__BICVO view criteria associated with the BICVO, which causes the data source security view criteria not to be enforced. Both the secured and unsecured versions of the BICVO for the tree must be deployed together in the same application module. You must do the following to allow the Flexfields ADF Modeler to generate a flattened view object containing only those attributes marked as BI Enabled:
- Set the BIEnabledFlag for your key flexfield
- Set the BIEnabledFlag for your descriptive flexfield
- Create flexfield business components
- Define custom properties on the Oracle Business Intelligence application module
For information on how to perform these tasks, see the following sections in this book: Section 22.12, "Preparing Descriptive Flexfield Business Components for Oracle Business Intelligence", and Section 23.4.3, "How to Prepare Key Flexfield Business Components for Oracle Business Intelligence". To properly resolve meanings for set-enabled attributes, the setID attribute must be exposed to the Oracle Business Intelligence layer, using the appropriate method for each of the following reference types: set-enabled lookups and set-enabled reference tables. The setID is required to retrieve the appropriate meaning if the lookup is set-enabled, and the Set Assignments Query is required to retrieve the setID. To expose the setID attribute for set-enabled lookups: set-enabled lookups (shared and Transactional Business Intelligence) are registered as warehouse domains, and the SetAssignment entity object is already provided by ATG. Build an entity object association between the fact entity object and the SetAssignment entity object for each set-enabled lookup on the fact, then expose setID as an attribute on the FactVO for each set-enabled lookup type on the FactVO. The Lookup function is used to retrieve the translated meaning from the warehouse using the setID parameter. For set-enabled reference tables, the setID is stored on the reference table itself. A unique ID is used as the primary key of the reference table; ID and language form the unique key of the translated reference table. The determinant value is not stored on the reference table; the foreign key used to reference the table is stored on transaction tables.
To expose the setID attribute in this case: because the foreign key to the reference table already exists on the transaction, meanings for set-enabled attributes should be resolved depending on usage.
- Transactional Business Intelligence only: resolve the meaning on the base view object using an entity object association, bringing in the setID attribute.
- Warehouse domain: a separate view object is required. Build a view link from the base view object to the reference view object; the setID attribute exists on the reference table view object.
Oracle Fusion Middleware Extensions for Applications provides special MLS currency view objects for Oracle Business Intelligence. To support multi-currency, create view links from the primary entity currency code fields on transaction view objects to the new currency view object.
http://docs.oracle.com/cd/E25054_01/fusionapps.1111/e15524/adv_bi_vos.htm
CC-MAIN-2016-22
en
refinedweb
Restart my program?

#1 Restart my program?
Posted 29 September 2012 - 12:50 PM

So, I've been programming a game for a while now... I've tried various types of code for restarting my program. Right now, I lost the code that helped me ask the person if they wanted to play again, but is there any way for me to program the code to make it ask you if you would like to play again? If so, thanks!

import random
import time

board = [0, 1, 2,
         3, 4, 5,
         6, 7, 8]

def show():
    print(board[0], "|", board[1], "|", board[2])
    print("----------")
    print(board[3], "|", board[4], "|", board[5])
    print("----------")
    print(board[6], "|", board[7], "|", board[8])

def checkLine(char, spot1, spot2, spot3):
    if board[spot1] == char and board[spot2] == char and board[spot3] == char:
        return True

def checkAll(char):
    if checkLine(char, 0, 1, 2): return True
    if checkLine(char, 1, 4, 7): return True
    if checkLine(char, 2, 5, 8): return True
    if checkLine(char, 6, 7, 8): return True
    if checkLine(char, 3, 4, 5): return True
    if checkLine(char, 0, 3, 6): return True  # was (1, 2, 3), which is not a winning line
    if checkLine(char, 2, 4, 6): return True
    if checkLine(char, 0, 4, 8): return True

while True:
    answer = input("Select a spot, any number 0 through 8:")
    answer = int(answer)
    time.sleep(1)
    if board[answer] != 'x' and board[answer] != 'o':
        board[answer] = 'x'
        if checkAll('x') == True:
            print("You win! Yay!")
            break
        while True:
            random.seed()
            opponent = random.randint(0, 8)
            if board[opponent] != 'o' and board[opponent] != 'x':
                board[opponent] = 'o'
                if checkAll('o') == True:
                    print("You lose! That's too bad.")
                break
    else:
        print("This spot is taken!")
    show()

Replies To: Restart my program?

#2 Re: Restart my program?
Posted 29 September 2012 - 05:23 PM

Take all the code you currently have in while True and put it in a function called play. Then have another function like:

def main():
    random.seed()  # only call this once
    while True:
        play()
        # ask if they want to play again
        # if no, break
    print("Thanks for playing")

main()

Honestly, your big problem is you don't know when to finish a game. Your play could look something like:

def play():
    board = [str(i) for i in range(9)]
    available = 9
    while True:
        show(board)
        playerTurn(board)
        available -= 1
        if checkAll(board, 'x'):
            print("You win! Yay!")
            break
        if available > 0:
            computerTurn(board, available)
            available -= 1
            if checkAll(board, 'o'):
                print("You lose! That's too bad.")
                break
        if available < 1:
            print("Tie.")
            break
    show(board)

Note, global variable board is bad. Also, you need to reset it if you play again, anyway. Hope this helps.

This post has been edited by baavgai: 29 September 2012 - 05:23 PM
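One hedged way to fill in the reply's "ask if they want to play again" placeholder (a sketch; it assumes play() runs a single complete game, as the reply describes):

def main():
    random.seed()  # only call this once
    while True:
        play()
        again = input("Play again? (y/n): ")
        if again.strip().lower() != "y":
            break
    print("Thanks for playing")

main()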
http://www.dreamincode.net/forums/topic/293644-restart-my-program/page__pid__1712080__st__0
CC-MAIN-2016-22
en
refinedweb
Answered by: how to return null from a constructor

Say, I am trying to do something like:

class something
{
    something(streamreader)
    {
        if (wrong format or end of file)
            // caller receives null as result
        else
            // build object from what has been read in file
    }
}

I tried 'return null', but it says it cannot do that since a constructor is 'void', so I cannot place a return statement. I'm not sure something like this.dispose would return a null, since the 'dispose' may be asynchronous (or something). I have the next alternative:

class something
{
    // blah blah
    something(string)
    // build object from string; check if string == null or wrong format before calling
    static bool checkformat(string)
    // check to see if string is adequate to build a something object
}

But if I use this, that would mean that I have to parse the string twice: once to check the format and once to build it, so I'm not too in favour of it. How do I return null from a constructor if the params are not adequate?
Thursday, November 30, 2006 9:07 PM

Question Answers - All replies

A constructor doesn't return anything. Its only job in life is to construct the object. That's all. Nothing else. You can only return a value if the method has a return type (non-void). Also, you can return null from string return types or other objects, except for, say, integer values.
Thursday, November 30, 2006 9:16 PM

Hmm... I think I didn't make myself clear. A constructor, whether it is called a constructor or not, is a function; it returns something, actually a pointer to some object of the type of the class it was declared in. A variable declared as some kind of object, or whatever inherits from the 'object' class, can point to that type of object or have a null value. Now, if I call a function and that function returns null for an object, that's a valid value. I am asking how to do that in a constructor. For example, if I was talking about 'plain C' and I made an alloc or malloc or whatever, it may return the pointer to the allocated space, or it may return a null if it was not possible to allocate the space. I want to return the object if the object was created successfully, or null if the parameters passed to the constructor function (which is an ordinary function in most senses, stated to return an object of some class) are not adequate.
Thursday, November 30, 2006 9:30 PM

It seems that's what I must do, but as stated above, I was trying to avoid that, since I would have to check for validity twice. I'm not happy about it, though.
Thursday, November 30, 2006 9:32 PM

What about making your constructor throw an exception when it isn't created properly, and then using try and catch statements in the function that's calling the constructor?
Friday, December 01, 2006 4:23 AM

It wouldn't be recommended and it's bad practice. You should really validate your inputs/objects before creating the class object.
Friday, December 01, 2006 12:38 PM

Also note that it is NOT the constructor that is returning the constructed object, but the new operator. I'm curious - why is it bad practice to throw an exception in a constructor? I know this is frowned upon in C++ - is the same true for C#, and why?
Friday, December 01, 2006 1:40 PM

Well, because exceptions are expensive and because it's the way the design pattern is, really. Maybe the same reason as for C/C++?
(I don't dev in C/C++.) It's just not recommended, and I've not seen any classes that throw an exception via the constructor, except for maybe invalid inputs, but even then it's recommended to check/validate your inputs before constructing an object - it's better design.
Friday, December 01, 2006 2:07 PM

It isn't bad practice to throw an exception from a constructor. If the inputs are invalid or the object cannot be constructed, then you should throw exceptions as with any other member. You have to write your class with no control over how it will be called (unless it is an internal class), so the only defence you have against invalid inputs is exceptions.
Friday, December 01, 2006 2:58 PM

Note, it is not a bad practice to throw an exception in a ctor (either in C# or C++). It is a bad practice to allow an exception to leave an object in a "half-constructed" state. For example (in C++):

class MyClass
{
    char* ptr1;
    char* ptr2;
public:
    MyClass()
    {
        ptr1 = new char[100];
        ptr2 = new char[10000];
    }
};

In that example, if the new for ptr2 threw an out-of-memory exception, the memory of ptr1 would be lost as a memory leak. The ctor should be written as:

MyClass()
{
    ptr1 = NULL;
    ptr2 = NULL;
    try
    {
        ptr1 = new char[100];
        ptr2 = new char[10000];
    }
    catch (...)
    {
        delete[] ptr1;
        delete[] ptr2;
        throw;
    }
}
Friday, December 01, 2006 4:13 PM

Ahmedilyas: "it wouldnt be recommended and its bad practice. you should really validate your inputs/objects before creating the class object"

No, mate. Validating a function's (or constructor's) arguments outside the function itself is the bad practice. If the function's interface changes even a tiny bit, you have a lot of search/replacing to do, or you'll be throwing exceptions anyway. And arguments about exceptions are way more expensive than the exceptions themselves. Since they're only supposed to be used in *exceptional* circumstances, they should never slow anything down in normal use, especially in a constructor, where there's just been a slow memory allocation. Bad practice is obfuscating and duplicating code to avoid throwing exceptions.
Friday, December 01, 2006 10:55 PM

Ah well, we have our own views, but I still wouldn't recommend it. I've always been told that throwing an exception in a constructor is not the way to go about doing things, and that inputs should be validated beforehand, before constructing an object. As long as the original question is resolved...
Friday, December 01, 2006 10:57 PM

I don't think the original question has been resolved. The OP is talking about "validating twice." This is exactly why I say not throwing exceptions from a constructor is bad practice. As far as I can see, there are only two proper ways to safely handle object initialization:
- Throw exceptions from the constructor
- Don't use exceptions; use empty constructors, and have an Init() method with a return value to indicate success or failure
Friday, December 01, 2006 11:05 PM

I'm glad some of you agree that throwing exceptions from a constructor is not necessarily bad practice, as I tend to do this, e.g., if the input arguments are invalid. I think it is one of those rules that has been handed down and got garbled in the process. Throwing an exception from a constructor is bad IF the object is left half constructed. In C++ this could be a real problem, where you have to manage all the memory yourself. So the rule has warped into "do not throw exceptions from a constructor".
In C# this is less likely to be a problem, and an exception is often the best way to report an error when creating an object, as long as the application state remains consistent before and after the exception.
Monday, December 04, 2006 1:32 PM

You could have the constructor create an empty object and then have an Init method to process any actual data and fill the object. The Init method could return NULL or an error code that would give the caller more information. Hell, you could even add a GetError method to give detailed error information. This way you never have an object that's half constructed; it would simply be set to default values. Any calls to the empty object could fail safely. I agree that throwing an exception in a constructor is not the way to go. You would simply be setting yourself up for future heartbreak.
Monday, December 04, 2006 8:37 PM

There are MANY classes in the .Net framework that throw exceptions, so it is totally untrue to say that it is not recommended practice! Examples include: DateTime, List<T>, Queue<T>, String(), Encoding()... and many, many more. In fact, it's difficult to find any .Net constructor that takes a parameter that DOESN'T throw an exception!
Tuesday, December 05, 2006 1:13 PM

Although I still don't see a problem with throwing an exception from a constructor, it seems that a few people still don't agree. So how's this for a compromise: instead of throwing an exception or using a separate Init() function, why not declare the constructor like this:

public Something(string szInput, out bool errorOccurred)

Then if there's a problem with the input, or if there isn't enough memory for the object, the calling method will know by checking the value of errorOccurred. Eh?
Tuesday, December 05, 2006 11:59 PM

Since parameter-accepting constructors for most .Net classes can throw exceptions, I think people just have to suck it up and accept that constructors will throw exceptions. I'm not sure where the idea came from that it's bad for a constructor to throw an exception. Could it be that people are confusing it with the fact that C++ destructors should never throw an exception? Anyway, people MUST TAKE NOTE that something as innocent-looking as:

using (FileStream myStream = new FileStream("myFile", FileMode.OpenOrCreate))

...can throw no less than NINE different kinds of exceptions. To be honest, it is somewhat worrying that there can be so much debate about exceptions in constructors. Are people writing code that blithely ignores the exceptions that can be thrown from all the .Net types?
Wednesday, December 06, 2006 9:46 AM

This is the most obvious solution, except the Create method should return a Something object instead of void.
Wednesday, December 06, 2006 3:33 PM

Now you know why so much software is so buggy. People don't want to handle errors - a) it's hard to do well and b) it disrupts the flow of what might otherwise be clean and easy-to-read code. Unfortunately, it's necessary, and I'm with you... you gotta do what you gotta do. But the answer is yes... I think a lot of people DO blithely ignore the possible exceptions. I have the luck of working with people who love breaking software, though, so the stuff I write usually gets a good pounding. While it's impossible to create bug-free software, I think I at least get the opportunity to create very reliable software.
Friday, December 08, 2006 5:37 PM

A couple of the posts in this discussion have highlighted the reasons to prefer factory methods over constructors.
I can't give a constructor a meaningful name, and I can only have the ctor return a single object of a specific type. Using a constructor is hard-coding a dependency into an application - which will need to happen occasionally, but can also hurt testability and maintainability. It sounds like the original poster has a scenario where some complicated logic is required to construct the right object - and this is a place where I'd favor using a factory method over both a constructor and over any "Init" type / two-step initialization technique. Two-phase construction introduces more complications than throwing from inside a constructor would ever introduce. What happens if Init is called twice? What sort of checks do I need to place into every method to ensure Init was correctly invoked once and once only? Life is simpler if I know the object is created all at once or not at all.
Friday, December 08, 2006 7:15 PM

The best way I know to return null from a constructor, or to cancel a constructor, is this:

public class Person
{
    public Person(int age, out Person result)
    {
        result = this;
        if (age > 120)
        {
            result = null;
            return;
        }
        // Continue with constructor...
    }
}

// Call constructor and return null if person is more than 120 years old.
Person p;
new Person(130, out p); // p is null
Friday, March 21, 2008 11:23 AM

I don't think that returning a null from a constructor is a good programming practice. That appears to be valid code that returns valid objects, not nulls. It would require intimate knowledge of the code to realize that a null was being returned. Works great if you have the source code for Person. Too bad if you don't. Rudedog
Friday, March 21, 2008 6:00 PM Moderator

An easier, and much more sensible, way to deal with this is (if you really didn't want to throw an exception):
Tuesday, March 25, 2008 3:15 PM

I agree with Scott, a factory method would be ideal for this situation. Having to create a Person object to create another Person object isn't right...
Tuesday, March 25, 2008 3:31 PM Moderator

Everything I've read on the subject from Microsoft indicates that it is perfectly valid to throw exceptions in constructors, and that in fact exceptions should ALWAYS be used to report errors so that errors are always reported and dealt with consistently. Most of the suggestions posted here to avoid throwing an exception in a constructor (such as returning the object as an out parameter) actually violate the design guidelines established by Microsoft for .NET. The only time you really want to avoid throwing an exception is when you expect it to happen in the normal flow of the program. That said, using factory methods is a perfectly reasonable approach when one needs to do something that can't be done with a constructor, such as the option of returning null instead of a valid object.
Tuesday, March 25, 2008 5:38 PM

Don't like the IsValid idea at all. You are given the illusion the object was created (because you've had one returned to you). It's way too easy to forget to call IsValid because, for starters, you have to notice that the class has an IsValid property. The pattern I've seen used with great success is: have a C'Tor and a TryCreate(input, out objectCreated) method. If you use the C'Tor, be prepared for exceptions. If you use the TryCreate, be prepared to get back false and have objectCreated be null.
Thursday, May 01, 2008 11:47 PM

I think this whole discussion here has derailed. Exceptions are used for *exceptional* cases.
Yes, they're more costly than simply an if-statement with a "return null" statement inside, but the thing is that the "return null" case will only happen as *an exception*; it should not be the general case. If you have to validate input from a file, like a CSV file full of data, then throwing exceptions on 50% of the data will slow down the import. On the other hand, if you *know* that the first column will be a valid Int32 value, doing an Int32.Parse, which will throw an exception on bad input, is *exactly the right idea*. Constructors *should* throw exceptions on bad data, unless you can *guarantee* that the constructor is only called from controlled methods, like factory methods that are in the same library. You should *never* assume the caller has done the job required to validate the input. But again, as long as the invalid data are *exceptions* and not the rule, you won't have a problem with performance. The alternative is the odd bug where wrong data slipping through to a class causes unpredictable results later on. Without the fail-at-once mentality, you're going to use more time figuring out why a class fails at a particular point, when in fact you could've known about the problem well in advance, at the point where the wrong data was given to the class, which will typically give you a stack trace that shows the location of the real bug: the code that is calling your constructor with wrong data. As for handling the problem, if all you do is return "I can't do this", you might compare this to a lightweight version of "throw new Exception();" where you simply don't tell the caller what exactly went wrong. This will make finding and fixing bugs in this code that much harder when all you know is that it "doesn't work". I'm sure most of you have heard that phrase from coworkers, support, customers, etc. and wondered how these people could work when they can't even read a simple error message back to you, but if you follow this "return null" way of doing things, you're ensuring this is all you'll ever know. I'm pretty sure the user of your software would feel much better, too, if the program told him that "The file you picked has the wrong file format" instead of just "I can't open that file", which, as was shown, could be from nine different exceptions (like ACL, file locked, network no longer there). Personally, I advocate that all public methods should throw exceptions if the invalid data is exceptional, or if there is no (known) way to process that particular type of data. If for no other reason, you'll get a crash which tells you that "this particular combination of data was invalid according to the rules I know at the moment", and then you can go figure out if you need to extend your method to handle that case. Under no circumstances is it bad to throw exceptions from a constructor as *a general rule*. It might be that you want to prevent costly exceptions that will occur a lot because you're doing things that will often fail (like the import case), but in that case you will still use exceptions for the absolutely invalid data, and then pre-validate the data to avoid the throw at all. The exception, however, is the last defense for the class. If you drop that, all bets are off.
This is my 2 cents, but if someone is going to challenge this with "throwing exceptions in constructors is bad practice", then I challenge you to come up with the source for this bad practice; otherwise it's just an opinion.
Sunday, May 04, 2008 11:30 AM

I am not good at English and I am a programmer beginner. I looked for this information too, and everywhere it said that a constructor can't return a value. What about this solution:

class something
{
public:
    something()
    {
        if ( !{test validity} )
        {
            delete(this);
            (something*)this = NULL;
        }
    }
};

But I don't know if it is a correct way (whether all allocated memory from the class will be destroyed, etc.). The next problem is that I am deallocating the memory in which the program is running... So write your opinion on whether I can solve this problem this way.
Monday, September 08, 2008 12:57 PM

"I am not good at English and I am a programmer beginner. I looked for this information too, and everywhere it said that a constructor can't return a value. What about this solution:"

The this keyword is read-only and you cannot assign to it. You cannot have a constructor that returns null. Your best bet is to create some static methods in your class that have the ability to create an object of your type based on your requirement. Enjoy Agility.
Monday, September 08, 2008 3:23 PM

Lasse V. Karlsen said: "I think this whole discussion here has derailed."

Anyone who'd like to add to this thread, could you please scroll up to Lasse's statement on May 4th? It sums it up very well. The constructor is no place for user input or text-file input validation. The constructor's job is to construct things, and if it will be handed garbage data regularly, it should throw(up) an exception regularly. By the way, learn "Design Patterns"; the Object Factory pattern is classic and should be used where it fits. It may apply in this case, but not as a way to get around constructors not returning null. IMHO... Les Potter, Xalnix Corporation, Yet Another C# Blog
Monday, September 08, 2008 4:21 PM

A long time ago in a galaxy far away... there was cpp, where you could explicitly define operator new, which was calling malloc(). It was easy to return NULL calling the new operator. It was a very interesting feature. It was used to create very complicated programs by almost very complicated programmers. These programs were debugged by very hard bugslayers. Right then the very hard bugslayers had beaten the complicated programmers.
Wednesday, July 29, 2009 1:31 PM

You cannot return null in the constructor, but you can overload the == operator and make the result look like it's null. The following code works fine (you don't need to override GetHashCode and Equals, but the compiler throws a warning if you don't):

Person p = new Person(150);
if (p == null)
{
    MessageBox.Show("Yes it's null!");
}

And this is the Person class:

public class Person
{
    public Person(int age)
    {
        if (age > 120)
        {
            isNull = true;
        }
    }

    private bool isNull;

    public static bool operator ==(Person personA, Person personB)
    {
        if ((object)personA != null && personA.isNull) { personA = null; }
        if ((object)personB != null && personB.isNull) { personB = null; }
        return object.Equals(personA, personB);
    }

    public static bool operator !=(Person a, Person b)
    {
        return !(a == b);
    }

    public override int GetHashCode()
    {
        if (isNull) { return 0; }
        return base.GetHashCode();
    }

    public override bool Equals(object obj)
    {
        if (obj is Person)
        {
            if (((Person)obj).isNull) { obj = null; }
        }
        return base.Equals(obj);
    }
}
Wednesday, January 13, 2010 9:17 PM

If possible, use the Null-object pattern.
Don't return a null if at all possible, because you start checking for NULLs everywhere. As an example, in the "build object from what has been read in file" case, have your null object return an empty list (or an empty StreamReader). If the problem is (contrived example) that your StreamReader is supposed to load the contents of a file and that file doesn't exist, throw an exception (even if it's in the constructor). Validate your inputs, throw exceptions, and handle those exceptions where necessary.
Wednesday, January 13, 2010 11:34

1. You could have a Parameter on the Factory Method that tells you why, but: a) you'd need to declare a separate Variable to hold the results of that Parameter, and b) to follow good practices, do so in every place you're trying to create the Object vs. sharing a Global one.
2. You could Throw a custom Exception, but that would require use of the Try Statement, which: a) is cumbersome, b) IMHO should be reserved for unexpected errors where you just want to display and/or log an error and exit vs. recover and continue processing, and c) cannot be enforced to exist (via a Base Class or a more generic Interface) like a Public Variable / Property of the Class can.
3. You could have a Shared validation Method, but: a) it would have the same disadvantage as option 1 above, plus b) it would have to duplicate the Parameters needed by the Constructor, and the appreciating human cost of that duplication is usually much more significant than the depreciating machine cost of Constructing an Object only to Destruct it shortly after.
For commonly expected errors (like the User typed in an invalid value(s) needed to create the Object), it's much easier to just check one or more Public Variable(s) / Property(ies) of the Class.
Friday, June 29, 2012 6:30

"For commonly expected errors (like the User typed in an invalid value(s) needed to create the Object), it's much easier to just check one or more Public Variable(s) / Property(ies) of the Class"

For commonly expected errors, why not validate the parameters prior to using them? If it is possible to determine the reason "why" after the fact, shouldn't you be able to determine the reason before the fact? I have a previous post on this thread. You cite the fact that try/catch is undesirable, which I agree with. Throwing an exception would be deceptive and misleading. A Factory Method seems to be the only best choice. There are examples of this in the Base Class Library, i.e. Delegate.CreateDelegate. Rudy =8^D Mark the best replies as answers. "Fooling computers since 1971."
June 30, 2012 10:50 AM Moderator

"For commonly expected errors (like the User typed in an invalid value(s) needed to create the Object), it's much easier to just check one or more Public Variable(s) / Property(ies) of the Class"

It would be even easier to forget to check those public variables, and to start using the object as if it had been constructed correctly. Personally, I think your suggestion that the class should validate the user input is suboptimal. It should be the responsibility of a business logic class to do that kind of verification, and then the verified data should be passed to the constructor of the object that requires the input data to be correct. Any user of the "verified" class can be sure that it is in a usable state. If you allow a class to have a property that tells you if it was properly constructed or not, then *every* method that is passed such an object will need to check that property.
Not only that; such methods will need to decide what to do if the object is NOT properly constructed. That would not be a great design...
Monday, July 02, 2012 7:52 AM - Edited by Matthew Watson Monday, July 02, 2012 8:21 AM

Thursday, July 05, 2012 5:08 PM

Thanks for the compliment. Under the scenario you describe above, I would not call those circumstances "unexpected errors". I would call them "bugs". A Factory Method should return null or a valid object. In fact, you do not even need to throw an exception. I say let the OS do it for you, because that is most likely what will happen anyway. If your Factory Method has the potential to throw exceptions, then it should catch them and return a null object. If the code is for your own internal use, then proceed by whatever means seems most logical and appropriate. I'm just saying that I would be livid if a constructor threw an exception on me. To me, that's a bug. Rudy =8^D Mark the best replies as answers. "Fooling computers since 1971."
Thursday, July 05, 2012 5:24 PM Moderator - Edited by Rudedog2 (Moderator) Thursday, July 05, 2012 5:28 PM

@Rudedog2: You're welcome.
Re. "I would call them "bugs".": I agree. That's precisely why I think they should generate Exceptions vs. just returning a Null and hoping that all instances of Calling Code will properly suspend/abort processing when Null is returned.
Re. "In fact, you do not even need to throw an exception.": When I said "Throw an Exception", I meant indirectly (by allowing a Runtime Error to occur un-Catch'ed) or directly (via an explicit Throw Statement), both from inside the Constructor.
Re. "I would be livid if a constructor threw an exception on me.": Well, Class Constructors in the .NET Framework do that all the time (albeit for what I would call "unexpected" errors / "bugs" in the Calling Code). Ex. "String(Char, Int32)", "List(Of T)(Int32)", "List(Of T)(IEnumerable(Of T))". Sent from my iMind
Thursday, July 05, 2012 6:56 PM

I didn't look up all of the constructors that you listed, but the String constructor throws an IndexOutOfRange Exception, which is documented. Remember, the original question is how to return null from a set of invalid parameters. The recommendation is to use a Factory Method, not a constructor, per se. Also, using a negative integer to index an array should throw an exception, and it is something that the consumer should catch. The original question was how to throw the exception and return something, preferably a null. I would bet that the other types you cited throw exceptions for invalid parameters, not unexpected or unforeseen issues. [EDIT] List<T> throws an exception if the parameter is null. Again, another developer error that has been foreseen. Not the scenario posed by the original question. Rudy =8^D Mark the best replies as answers. "Fooling computers since 1971."
Thursday, July 05, 2012 9:41 PM Moderator

Re. "the String constructor throws an IndexOutOfRange Exception": Huh? The String Constructor example I listed was "String(Char, Int32)", which, according to the MSDN docs, throws an "ArgumentOutOfRangeException" (not "IndexOutOfRange") Exception (btw, when "count is less than zero", which the Parameter's Type of "Integer" does not prevent).
Re. "which is documented": Huh? I never claimed it wasn't. If you're just trying to imply that being "documented" means it's not an "unexpected" error, then see my 3rd "Re." after this one.
Re. "Remember, the original question is how to return null from a set of invalid parameters.
The recommendation is to use a Factory Method, not a constructor, per se." and "The original question was how to throw the exception and return something, preferably a null.": Huh? The O.P. asks at the end of his post, "How do I return null from a constructor if the params are not adequate?" The answer to the O.P.'s literal Q. is "No, you can't do it". Now the *closest* *workaround* to what he's asking for is to force use of a Factory Method (which returns Null on errors) vs. a Constructor. Regardless of whether one uses a *theoretical* Constructor that *could* return Null or a Factory Method that *can*, to construct an Object, what I (and others, including yourself) have been trying to also point out is that it's bad practice to rely on it doing so *if* it's doing so due to errors that the App was supposedly designed to catch (i.e. via "Business Logic") before Object Construction. In addition to that, I (and I think no one else in this Thread) am also trying to point out that *if* the errors were designed to be caught before Object Construction, then the Class should not *quietly* announce the errors by simply returning Null, but *instead / also* *loudly* announce them by (directly / indirectly) Throwing Exceptions from inside the Constructing Method (New or Factory), so that it's much less likely the Consumer will continue processing vs. aborting. (Code that simply avoids References to the Null Object is much less likely to be well-designed / tested code, and is therefore much more likely to result in invalid processing / corrupted data without generating other error messages, and/or to fail later in the process when it's harder to trace the source and/or recover from the damage.) BTW, in the O.P.'s specific example, where the Class' Constructor is checking for "wrong format or end of file" (aka an empty file?) prior to returning an Object that represents the File's contents, I don't think his App was designed to check for those errors prior to Object Construction, nor do I think it should've been. I think that in that specific example: a) the validation logic should be encapsulated inside his Class, which would make the errors it catches "expected" errors at the time of Object Construction, and b) *if* his Consumer code wants to know why the Construction failed, then I recommend the Object return the error(s) via Public Variable(s) / Property(ies) or Optional Parameter(s) on the Constructing Method (New or Factory).
Re. "Also, using a negative integer to index an array should throw an exception, and it is something that the consumer should catch.": Did you mean "should catch" with a Try-Catch Statement around the Constructor Call, or (as several of us, including yourself, have recommended above) before even calling the Constructor?
Re. "I would bet that the other types you cited throw exceptions for invalid parameters, not unexpected or unforeseen issues.": As I was trying to point out in my 2nd reply above, whether an error is considered "unexpected" / "unforeseen" / a "bug" (in the Consumer) at the point of Object Construction depends on whether the Consumer was designed to catch those errors prior to Object Construction. If the latter was, then the former is. As for the .NET Classes, since they're Throwing Exceptions vs.
returning errors from their Constructors and we can't change them, I think the best practice would be to validate the Parameters prior to calling a .NET Class Constructor, which would make the errors "unexpected" inside the .NET Constructors, such that if somehow my validation logic failed, I'd want errors due to it to be *loudly* announced by the Constructor via Thrown Exceptions and either: a) just generate a Runtime Error and abort the App, or b) be Catch'ed via a Try Statement, but only for display / logging purposes before ultimately aborting the App. Sent from my iMind
Friday, July 06, 2012 12:27 AM

"I'm just saying that I would be livid if a constructor threw an exception on me. To me, that's a bug."

Sure it's a bug - in your own code, not in the constructor throwing the exception. :) How do you deal with the myriad of .NET types that throw exceptions from their constructors?
* DateTime - if you pass in an illegal year/month/day combination
* FileStream - if you try to open a non-existent file
* Hundreds of other classes - if you pass in null for parameters that are not allowed to be null
There are so very many - how do you deal with it? To me it is pretty clear that if you commit a programming error by passing bad data to a constructor, you should get an exception. I would like to point you at Microsoft's documentation for Constructor Design. In particular, note the comment: "Do throw exceptions from instance constructors if appropriate."
Friday, July 06, 2012 8:16 AM - Edited by Matthew Watson Friday, July 06, 2012 8:19 AM
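For readers skimming this archived thread, here is a compact sketch of the TryCreate / factory-method pattern the discussion converges on, written in Python rather than C# (the class and method names are illustrative, not from the thread):

class Something:
    def __init__(self, text):
        # By the time the constructor runs, 'text' is assumed valid;
        # a bad value here would be a caller bug, worth an exception.
        self.text = text

    @classmethod
    def try_create(cls, text):
        """Factory method: return a Something, or None for inadequate input."""
        if text is None or not text.strip():
            return None
        return cls(text)

obj = Something.try_create("")     # -> None, and no half-built object exists
good = Something.try_create("ok")  # -> a fully constructed Something

This validates once, inside the factory, so the caller neither parses twice nor wraps the call in try/catch for expected bad input.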
https://social.msdn.microsoft.com/Forums/en-US/1450ed84-277f-46d3-b2ea-8352986f877c/how-to-return-null-from-a-constructor?forum=csharplanguage
CC-MAIN-2016-22
en
refinedweb
#include <sys/types.h>
#include <sys/stream.h>
#include <sys/stropts.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

int qassociate(queue_t *q, int instance);

Interface Level: Solaris DDI specific (Solaris DDI).

Parameters:
q - Pointer to a queue(9S) structure. Either the read or write queue can be used.
instance - Driver instance number, or -1.

Description: The qassociate() function must be used by DLPI style 2 device drivers to manage the association between STREAMS queues and device instances. The gld(7D) module does this automatically on behalf of drivers based on it. It is recommended that gld(7D) be used for network device drivers whenever possible. The qassociate() function can be called from the stream's put(9E) entry point.

Return Values:
0 - Success.
-1 - Failure.

Examples: A DLPI style 2 network driver's DL_ATTACH_REQ code specifies:

See Also: dlpi(7P), gld(7D), open(9E), put(9E), ddi_no_info(9F), queue(9S)
http://docs.oracle.com/cd/E36784_01/html/E36886/qassociate-9f.html
CC-MAIN-2016-22
en
refinedweb
java.lang.Object
  org.springframework.security.acl.basic.BasicAclProvider

public class BasicAclProvider

Retrieves access control list (ACL) entries for domain object instances from a data access object (DAO). This implementation will provide ACL lookup services for any object for which it can determine the AclObjectIdentity, by calling the obtainIdentity(Object) method. Subclasses can override this method if they only want the BasicAclProvider responding to particular domain object instances. BasicAclProvider will walk an inheritance hierarchy if a BasicAclEntry returned by the DAO indicates it has a parent. NB: inheritance occurs at a domain instance object level. It does not occur at an ACL recipient level. This means all BasicAclEntrys for a given domain instance object must have the same parent identity, or all BasicAclEntrys must have null as their parent identity. A cache should be used. This is provided by the BasicAclEntryCache. BasicAclProvider is set up by default to use the NullAclEntryCache, which performs no caching. To implement the getAcls(Object, Authentication) method, BasicAclProvider requires an EffectiveAclsResolver to be configured against it. By default the GrantedAuthorityEffectiveAclsResolver is used.

public BasicAclProvider()

public void afterPropertiesSet()
    Specified by: afterPropertiesSet in interface InitializingBean

public AclEntry[] getAcls(Object domainInstance)
    Description copied from interface: AclProvider. Will never be called unless the AclProvider.supports(Object) method returned true.
    Specified by: getAcls in interface AclProvider
    Parameters: domainInstance - the instance for which ACL information is required (never null)
    Returns: null if no ACLs apply to the specified domain instance

public AclEntry[] getAcls(Object domainInstance, Authentication authentication)
    Description copied from interface: AclProvider. Obtains the ACLs that apply to the specified domain instance and presented Authentication object. Will never be called unless the AclProvider.supports(Object) method returned true.
    Specified by: getAcls in interface AclProvider
    Parameters: domainInstance - the instance for which ACL information is required (never null); authentication - the principal for which ACL information should be filtered (never null)
    Returns: the applicable ACLs (or null if no such ACLs are found)

public BasicAclDao getBasicAclDao()

public BasicAclEntryCache getBasicAclEntryCache()

public Class getDefaultAclObjectIdentityClass()

public EffectiveAclsResolver getEffectiveAclsResolver()

public Class getRestrictSupportToClass()

protected AclObjectIdentity obtainIdentity(Object domainInstance)
    Obtains the AclObjectIdentity of a passed domain object instance. This implementation attempts to obtain the AclObjectIdentity via reflection inspection of the class for the AclObjectIdentityAware interface. If this fails, an attempt is made to construct a getDefaultAclObjectIdentityClass() object by passing the domain instance object into its constructor.
    Parameters: domainInstance - the domain object instance (never null)
    Returns: null if one could not be obtained

public void setBasicAclDao(BasicAclDao basicAclDao)

public void setBasicAclEntryCache(BasicAclEntryCache basicAclEntryCache)

public void setDefaultAclObjectIdentityClass(Class defaultAclObjectIdentityClass)
    Allows selection of the AclObjectIdentity class that an attempt should be made to construct if the passed object does not implement AclObjectIdentityAware. NB: Any defaultAclObjectIdentityClass must provide a public constructor that accepts an Object. Otherwise it is not possible for the BasicAclProvider to try to create the AclObjectIdentity instance at runtime.
    Parameters: defaultAclObjectIdentityClass

public void setEffectiveAclsResolver(EffectiveAclsResolver effectiveAclsResolver)

public void setRestrictSupportToClass(Class restrictSupportToClass)
    If set to a value other than null, the supports(Object) method will only support the indicated class. This is useful if you wish to wire multiple BasicAclProviders in a list of AclProviderManager.providers but only have particular instances respond to particular domain object types.
    Parameters: restrictSupportToClass - the class to restrict this BasicAclProvider to service requests for, or null (the default) if the BasicAclProvider should respond to every class presented

public boolean supports(Object domainInstance)
    An object will only be supported if it (i) is allowed to be supported as defined by the setRestrictSupportToClass(Class) method, and (ii) if an AclObjectIdentity is returned by obtainIdentity(Object) for that object.
    Specified by: supports in interface AclProvider
    Parameters: domainInstance - the instance to check
    Returns: true if this provider supports the passed object, false otherwise
http://docs.spring.io/spring-security/site/docs/2.0.x/apidocs/org/springframework/security/acl/basic/BasicAclProvider.html
CC-MAIN-2016-22
en
refinedweb
Voice Interface and User Experience Testing for a Custom Skill

Voice interface and user experience testing focuses on:
- Testing the user experience to ensure that the skill is aligned with several key features of Alexa that help create a great experience for customers.
- Reviewing the intent schema, the set of sample utterances, and the list of values for any custom slot types you have defined to ensure that they are correct, complete, and adhere to voice design best practices. These components are defined on the Interaction Model page for your skill in the developer portal.

These tests address the following goals:
- Increase the different ways end users can phrase requests to your skill.
- Evaluate the ease of speech recognition when using your skill (was Alexa able to recognize the right words?).
- Improve language understanding (when Alexa recognizes the right words, did she understand what to do?).
- Ensure that users can speak to Alexa naturally and spontaneously.
- Ensure that Alexa understands most requests you make, within the context of a skill's functionality.
- Ensure that Alexa responds to users' requests in an appropriate way, by either fulfilling them or explaining why she can't.

Many of these tests verify that your skill adheres to the design guidelines described in the Alexa Voice Design Guide. You may want to review those guidelines while working through this section. For recommendations for sample utterances, see Best Practices for Sample Utterances and Custom Slot Type Values. Note that many of these tests require that you have a device for voice testing. If you do not have a device with Alexa, you can use third-party Alexa-enabled services, such as Echosim.io, to test your Alexa skill. This document is oriented towards skills that do not include a screen or touch component. To return to the high-level testing checklist, see Certification Requirements for Custom Skills.

- 4.1. Session Management
- 4.2. Intent and Slot Combinations
- 4.3. Intent Response (Design)
- 4.4. Supportive Prompting
- 4.5. Invocation Name
- 4.6. One-Shot Phrasing for Sample Utterances
- 4.7. Variety of Sample Utterances
- 4.8. Intents and Slot Types
- 4.9. Custom Slot Type Values
- 4.10. Writing Conventions for Sample Utterances
- 4.11. Error Handling
- 4.12. Providing Help
- 4.13. Stopping and Canceling
- Appendix: Deprecated Test for Sample Utterances (Slot Type Values)
- Next Steps

4.1. Session Management

Every response sent from your skill to the Alexa service includes a flag indicating whether the conversation with the user (the session) should end or continue. If the flag is set to continue, Alexa then listens and waits for the user's response. For Amazon devices such as Amazon Echo that have a blue light ring, the device lights up to give the user a visual cue that Alexa is listening for the user's response. On Echo Show, the top of the screen flashes blue. This test verifies that the text-to-speech provided by your skill and the session flag work together for a good user experience. Responses that ask questions leave the session open for a reply, while responses that fulfill the user's request close the session.

4.2. Intent and Slot Combinations

A skill may have several intents and slots. This test verifies that each intent returns the expected response with different combinations of slots. You may want to use a table of intent and slot values to track this test and ensure that you test every intent and slot combination.
4.3. Intent Response (Design)

A good user experience for a skill depends on the skill having well-designed text-to-speech responses. Alexa Voice Design Guide: What Alexa Says provides recommendations for designing your skill's responses. This test verifies that your skill's responses meet these recommendations. You can use the same set of intent and slot combinations used for the Intent and Slot Combinations test.

4.4. Supportive Prompting

A user can begin an interaction with your skill without providing enough information to know what they want to do. This might be either a no intent request (the user invokes the skill but does not specify any intent at all) or a partial intent request (the user specifies the intent but does not provide the slot values necessary to fulfill the request). In these cases, the skill must provide supportive prompts asking the user what they want to do. This test verifies that your skill provides useful prompts for these scenarios. Note that some skills are designed to respond to opening the skill (a LaunchRequest with no intent) with a complete response, such as a fact about space, and then end the session. For these skills, do the first test and verify that you get a complete response. See What Alexa Says for recommendations for designing prompts.

4.5. Invocation Name

Users say the invocation name for a skill to begin an interaction. Inspect the skill's invocation name and verify that it meets the invocation name requirements described in Choosing the Invocation Name for a Custom Skill.

4.6. One-Shot Phrasing for Sample Utterances

Most skills provide quick, simple, "one-shot" interactions in which the user asks a question or gives a command, the skill responds with an answer or confirmation, and the interaction is complete. In these interactions, the user invokes your skill and states their intent all in a single phrase. The ask and tell phrases are the most natural phrases for starting these types of interactions. Therefore, it is critical that you write sample utterances that work well with these phrases and are easy and natural to say. In these tests, you review the sample utterances you've written for the skill, then test them by voice to ensure that they work as expected.

4.7. Variety of Sample Utterances

Given the flexibility and variation of spoken language in the real world, there will often be many different ways to express the same request. Therefore, your sample utterances must include multiple ways to phrase the same intent. In this test, inspect the sample utterances for all intents, not just the "one-shot" intents described in One-Shot Phrasing for Sample Utterances.

4.8. Intents and Slot Types

Slots are defined with different types. Built-in types such as AMAZON.DATE convert the user's spoken text into a different format (such as converting the spoken text "march fifth" into the date format "2017-03-05"). Custom slot types are used for items that are not covered by Amazon Alexa's built-in types. For this test, review the intent schema and ensure that the correct slot types are used for the type of data the slot is intended to collect. Note that this test assumes you have migrated to the updated slot types as described in Migrating to the Improved Built-in and Custom Slot Types. If you are still using the previous version (for instance, DATE instead of AMAZON.DATE), then you need to also perform the Sample Utterances (Slot Type Values) test.
4.9. Custom Slot Type Values

The custom slot type is used for items that are not covered by Amazon's built-in types and is recommended for most use cases where a slot value is one of a set of possible values.

4.10. Writing Conventions for Sample Utterances

Sample utterances must be written according to defined rules in order to successfully build a speech model for your skill.

4.11. Error Handling

Unlike a visual interface, where the user can only interact with the objects presented on the screen, there is no way to limit what users can say in a speech interaction. Your skill needs to handle a variety of errors in an intelligent and user-friendly way. This test verifies your skill's ability to handle common errors. For more information on validating user input, please see Handling Possible Input Errors.

4.12. Providing Help

A skill must have a help intent that can provide additional instructions for navigating and using the skill. Implement the AMAZON.HelpIntent to provide this. You do not need to provide your own sample utterances for this intent, but you do need to implement it in the code for your skill. For details, see Implementing the Built-in Intents. This test verifies that this intent exists and provides useful information. For more about designing help for your skill, see What Alexa Says.

4.13. Stopping and Canceling

Your skill must respond appropriately to common utterances for stopping and canceling actions (such as "stop," "cancel," "never mind," and others). The built-in AMAZON.StopIntent and AMAZON.CancelIntent intents provide these utterances. In most cases, these intents should just exit the skill, but you can map them to alternate functionality if it makes sense for your particular skill. See Implementing the Built-in Intents.

Appendix: Deprecated Test for Sample Utterances (Slot Type Values)

If all of your slots use the newer slot types with the AMAZON namespace (such as AMAZON.DATE), you do not need to do this test. In previous versions of the Alexa Skills Kit, it was necessary to include slot values showing different ways of phrasing the slot data in your sample utterances. For example, sample utterances for a DATE slot were written like this:

OneshotTideIntent when is high tide on {january first|Date}
OneshotTideIntent when is high tide {tomorrow|Date}
OneshotTideIntent when is high tide {saturday|Date}
...(many more utterances showing different ways to say the date)

If your skill still uses this syntax for the built-in slot types, you need to review the sample slot values in your sample utterances. We strongly recommend migrating to the updated slot types that no longer require the sample values.
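As an illustration of how a skill's code might satisfy the session-management, help, and stop/cancel tests above, here is a minimal sketch of a raw request handler in Python (no Alexa SDK is used; the request and response JSON shapes follow the custom skill interface, and the speech text is placeholder):

def handler(event, context):
    request = event["request"]
    if request["type"] == "IntentRequest":
        name = request["intent"]["name"]
        if name == "AMAZON.HelpIntent":
            # Help asks a question, so the session stays open (section 4.1)
            return speak("You can ask me to do X or Y. What would you like?",
                         end_session=False)
        if name in ("AMAZON.StopIntent", "AMAZON.CancelIntent"):
            # Stop and cancel simply exit the skill (section 4.13)
            return speak("Goodbye!", end_session=True)
    # LaunchRequest or anything else: prompt and keep the session open
    return speak("Welcome. What would you like to do?", end_session=False)

def speak(text, end_session):
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }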
https://developer.amazon.com/docs/custom-skills/voice-interface-and-user-experience-testing-for-a-custom-skill.html
CC-MAIN-2017-43
en
refinedweb
Opened 11 years ago Closed 11 years ago #2363 closed enhancement (fixed) [patch] Tighten subclass check at top of db.models.base.ModelBase.__new__. Description The "not bases or bases == (object,)" check at the top of ModelBase.new assumes that if the class-to-be has any bases, that one of them will be Model. It cropped up while writing a mixin superclass for models that provided some new service-ish functionality for my app. The mixin requires a metaclass of its own. To satisfy guido's metaclass resolution rules, my mixin's metaclass extends ModelBase. The catch: when evaluating my mixin definition, ModelBase is invoked. My mixin has a superclass, so it slips past the check, and things explode. My own metaclass can opt to check the Model subclass condition itself, and not call super(..).new, but that is hackish. Please consider the attached patch instead. Thanks for listening! Attachments (4) Change History (15) Changed 11 years ago by comment:1 Changed 11 years ago by comment:2 Changed 11 years ago by comment:3 Changed 11 years ago by This is a bug in our implementation of meta-classes and respecting sub-classing. Regardless of particular use-cases we know about, we should fix it. I'll take care of it. comment:4 Changed 11 years ago by comment:5 Changed 11 years ago by Checking on name == "Model" is even 'hackisher' IMHO. I have a Model named Model.. This check doesn't take the difference between django.db.models.Model and project.app.models.Model in account and django.db.models.ForeignKey(Model) breaks. from django.db import models class Measures(models.Model): top = models.IntegerField() waist = models.IntegerField() hip = models.IntegerField() def is_major_turnon(): import random return random.choice((True, False)) #Mind over matter ;) class Model(models.Model): name = models.CharField(maxlength=25) measures = models.ForeignKey(Measures) class AmericasTopModel(models.Model): season = models.IntegerField() model = models.ForeingKey(Model) def program_sucks(): return True This is not my real app, just some witty example. My real app records devices and their makes and models. comment:6 follow-up: 7 Changed 11 years ago by That name check is there to prevent a chicken-and-egg problem when the definition of (django's) Model itself triggers metaclass evaluation. I don't like it either, but there needs to be a guard against the actual reference later in the line. A better way may be to just try resolving the reference to Model in a try. If the metaclass truly is evaluating django's Model class, a NameError will be raise, and we can know to ignore this type. (I'll put up a patch when I get home...) Changed 11 years ago by Even tighter check. With test. comment:7 Changed 11 years ago by Replying to phil.h.smith@gmail.com: I've uploaded a slight mod of your patch with some added testcode harnassing my situation. I don't know if "invalid_models" is the right place for such a test, but I saw some other valid models in the file. comment:8 follow-up: 9 Changed 11 years ago by I'm kinda leery of adding the module check: if the name check is hackish, wouldn't it be more of the same? I concede that the only cases that would defeat it are pathological.?) Changed 11 years ago by now with less name checks Changed 11 years ago by Merge of phil.h.smith@… cleaner code and my test. comment:9 Changed 11 years ago by Replying to phil.h.smith@gmail.com: I'm kinda leery of adding the module check: if the name check is hackish, wouldn't it be more of the same? 
I concede that the only cases that would defeat it are pathological.
I agree, it's just what the first patch meant to say: "Is this the Model which is defined elsewhere in this module?" It's arguably more explicit than catching the NameError, but the comment solves that. Tapas bars can be very voracious. I have one living around the corner, which makes my residence in its hunting grounds. ;) The patch tested out fine. I've merged the code with my test code. Maybe you can add some of your mix-in code to the test too? I've never written a mix-in in Python, but could this be a start for a test?

class PurrMixin:
    """Working Cars"""
    def make_noise(self):
        return "VROOOM"

class RacketMixin:
    """Car Wrecks"""
    def make_noise(self):
        return "RRRRRR crack BANG RRRR"

class HummingMixin:
    """Fancy Hybrid Cars"""
    running_motor = None
    def make_noise(self):
        if self.running_motor == 'Combustion':
            return "VROOOM"
        else:
            return "mmmmmm"

comment:10 Changed 11 years ago by
In principle, this looks like the right solution. Thanks, guys. :-) A shame that Python doesn't let us access the thing raising the NameError more easily for tighter checking (by including it as an attribute in the NameError class), but I don't want to start matching against the string version of the exception, so we'll leave that. I'll check this in when I get a chance. The need that led me to file the issue will be addressed if/when model inheritance is finished. Particularly, if it supports multiple abstract base classes for mixing in columns.
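(Editor's note: to make the scenario in this ticket concrete, here is a minimal sketch with hypothetical names, written in the Python 2 style of the era. The guard mirrors the tightened check the patches converge on: look for a base class whose metaclass is ModelBase instead of matching on names.)

from django.db import models
from django.db.models.base import ModelBase

class ServiceMeta(ModelBase):
    """Hypothetical mixin metaclass; it must extend ModelBase to satisfy
    Python's metaclass resolution rules when mixed with Model subclasses."""
    def __new__(mcs, name, bases, attrs):
        # Tightened guard: only run Django's Model machinery when at least
        # one base is itself a Model, i.e. its type is ModelBase.
        if not any(isinstance(base, ModelBase) for base in bases):
            return type.__new__(mcs, name, bases, attrs)
        return super(ServiceMeta, mcs).__new__(mcs, name, bases, attrs)

class ServiceMixin(object):
    """The mixin has a superclass (object), so without the tightened guard
    ModelBase would try to process it as a model and explode."""
    __metaclass__ = ServiceMeta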
https://code.djangoproject.com/ticket/2363
CC-MAIN-2017-43
en
refinedweb
Classifications were introduced in the Atlas Types section as one of the composite metatypes, along with Entities and Structs. Classifications share similarities with these other composite metatypes in that they define a Type and have a uniquely identifiable name in the type system. They can also have a set of attributes, although these attributes can only be of native types. Like Entities, Classifications can extend from other super Classifications, and thus inherit attributes defined in those super Classifications. However, unlike Entities, Classification instances are not entities. They do not have a uniquely-identifiable GUID, and consequently they cannot be referenced from attributes in other types. Therefore, the way in which a Classification instance is defined and used is different from the way in which an entity is defined and used. Classification instances also have one other special significance in Atlas. They can be associated with any entity in Atlas without prior declaration of this fact in the Type definition of the entity. Note that, in contrast, an entity reference in a type must be declared a priori (for example, HBase table references to an HBase namespace should be declared up-front). The Atlas type system recognizes Classifications, and includes specific APIs that can be used to associate Classifications with entities.

Description: Because Classifications are Atlas Types, the same APIs used to create Types are used to create Classifications, except that Attribute definitions cannot refer to non-native metatypes.

Request: POST http://<atlas-server-host:port>/api/atlas/v2/types/typedefs

Request Body: The body for this request is the same structure as the TypesDef structure that is defined in Important Atlas API Data Types. The Classifications should be defined under the classificationDefs attribute.

Response: The response is the same as the response for a Type Definition request, and contains the names of the defined Classifications.

Example: Using our running example, we will define two Classifications:

PublicData – Any metadata marked with this Classification indicates that this data was collected from publicly available sources. Therefore, any policies applicable to publicly collected data can be applied to this data.

Retainable – This Classification indicates that any metadata associated with this Classification should be retained for a period of time. The time period is maintained in a retentionPeriod attribute, which is the duration in days.

Example Request Body:

{
  "enumDefs": [],
  "structDefs": [],
  "classificationDefs": [
    {
      "superTypes": [],
      "category": "CLASSIFICATION",
      "name": "PublicData",
      "description": null,
      "attributeDefs": []
    },
    {
      "superTypes": [],
      "category": "CLASSIFICATION",
      "name": "Retainable",
      "description": null,
      "attributeDefs": [
        {
          "name": "retentionPeriod",
          "typeName": "int",
          "isOptional": false,
          "isUnique": false,
          "isIndexable": true,
          "constraints": null
        }
      ]
    }
  ],
  "entityDefs": []
}

Note that the classificationDefs attribute contains the defined Classifications. The rest of the metatypes – structs, enums, and Entities – are empty. The retentionPeriod attribute is defined as an int in the Retainable Classification.

Example Response:

{
  "requestId": "qtp221036634-18 - 59cbed8a-3637-496f-8b40-80ec829ce493",
  "types": [
    { "name": "Retainable" },
    { "name": "PublicData" }
  ]
}

Description: Because Classifications are a specific metatype (like Entities), the same API used to list a specific metatype can be used to list Classifications.
Request: GET http://<atlas-server-host:port>/api/atlas/types?type=CLASSIFICATION

Response: The response is a list of Classification names.

Example Response:

{
  "results": [ "Retainable", "PublicData" ],
  "count": 2,
  "requestId": "qtp221036634-16 - 423d9f90-79ae-4b29-b9bf-2d2a1d05c2bd"
}

Description: Because Classifications are a specific metatype (like Entities), the same API used to retrieve a specific metatype can be used to retrieve a Classification.

Request: GET http://<atlas-server-host:port>/api/atlas/types/{Classification_name}

Response: The response for this request is the same structure as the TypesDef structure that is defined in Important Atlas API Data Types. The ClassificationTypes attribute contains the type definition of the Classification specified in the request.

Example Request: GET http://<atlas-server-host:port>/api/atlas/types/Retainable

Example Response:

{
  "typeName": "Retainable",
  "definition": {
    "enumTypes": [],
    "structTypes": [],
    "ClassificationTypes": [
      {
        "superTypes": [],
        "hierarchicalMetaTypeName": "org.apache.atlas.typesystem.types.ClassificationType",
        "typeName": "Retainable",
        "typeDescription": null,
        "attributeDefinitions": [
          {
            "name": "retentionPeriod",
            "dataTypeName": "int",
            "multiplicity": "required",
            "isComposite": false,
            "isUnique": false,
            "isIndexable": true,
            "reverseAttributeName": null
          }
        ]
      }
    ],
    "classTypes": []
  },
  "requestId": "qtp221036634-204 - b9f43388-49d8-452b-8901-d05581d2b442"
}

Description: To catalog entities using Classifications, we must associate Classification instances with entities.

Request: POST http://<atlas-server-host:port>/api/atlas/entities/{entity_guid}/Classifications

Request Body: The request body is a Classification InstanceDefinition structure that is defined in Important Atlas API Data Types.

Response: No data is returned in the response. A 201 status code indicates success.

Example: In this example, we annotate our webtable (GUID f4019a65-8948-46f1-afcf-545baa2df99f) with the PublicData Classification to indicate that it is a data asset that is created by crawling public sites. We also set a Retainable Classification on the column family contents (GUID 9e6308c6-1006-48f8-95a8-a605968e64d2) with a retention period of 100 days. The following requests would be sent:

Example Request: POST http://<atlas-server-host:port>/api/atlas/entities/f4019a65-8948-46f1-afcf-545baa2df99f/Classifications

Example Request Body:

{
  "jsonClass": "org.apache.atlas.typesystem.json.InstanceSerialization$_Struct",
  "typeName": "PublicData",
  "values": {}
}

Example Request: POST http://<atlas-server-host:port>/api/atlas/entities/9e6308c6-1006-48f8-95a8-a605968e64d2/Classifications

Example Request Body:

{
  "jsonClass": "org.apache.atlas.typesystem.json.InstanceSerialization$_Struct",
  "typeName": "Retainable",
  "values": {
    "retentionPeriod": "100"
  }
}

Description: When Classification instances are associated with entities according to the structure that is defined in Important Atlas API Data Types, the EntityDefinition includes the ClassificationNames and Classifications attributes. A request for an EntityDefinition returns a response that includes the ClassificationNames and Classification values.

Request: GET http://<atlas-server-host:port>/api/atlas/entities/{entity_guid}

Example Request: This is a request for an HBase table EntityDefinition.

GET http://<atlas-server-host:port>/api/atlas/entities/f4019a65-8948-46f1-afcf-545baa2df99f

Example Response: For the sake of brevity, only the ClassificationNames and Classification values are shown below.

{
  ...
"typeName": "hbase_table", "values": { ... "columnFamilies": [ { "typeName": "hbase_column_family", "values": { "qualifiedName": "default.webtable.contents@cluster2", }, "ClassificationNames": [ "Retainable" ], "Classifications": { "Retainable": { "jsonClass": "org.apache.atlas.typesystem.json.InstanceSerialization$_Struct", "typeName": "Retainable", "values": { "retentionPeriod": 100 } } } } ], "qualifiedName": "default.webtable@cluster2", ... "ClassificationNames": [ "PublicData" ], "Classifications": { "PublicData": { "jsonClass": "org.apache.atlas.typesystem.json.InstanceSerialization$_Struct", "typeName": "PublicData", "values": {} } } } } Description: This is a simple DELETE operation. Request: DELETE http://<atlas-server-host:port>/api/atlas/entities/{entity_guid}/Classifications/{Classification_name} Response: No data is returned. Example Request: DELETE http://<atlas-server-host:port>/api/atlas/entities/f4019a65-8948-46f1-afcf-545baa2df99f/Classifications/PublicData
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_data-governance/content/atlas_catalog_metadata_classifications.html
CC-MAIN-2017-43
en
refinedweb
When two pure tones are nearly in tune, you hear beats. The perceived pitch is the average of the two pitches, and you hear it fluctuate as many times per second as the difference in frequencies. For example, an A 438 and an A 442 together sound like an A 440 that beats four times per second. (Listen)

As the difference in pitches increases, the combined tone sounds rough and unpleasant. Here are sound files combining two pitches that differ by 16 Hz and 30 Hz.

16 Hz:
30 Hz:

The sound becomes more pleasant as the tones differ more in pitch. Here's an example of pitches differing by 100 Hz. Now instead of hearing one rough tone, we hear two distinct tones in harmony. The two notes are at frequencies 440-50 Hz and 440+50 Hz, approximately the G and B above middle C.

100 Hz:

If we separate the tones even further, we hear one tone again. Here we separate the tones by 300 Hz. Now instead of hearing harmony, we hear only the lower tone, 440-150 Hz. The upper tone, 440+150 Hz, changes the quality of the lower tone but is barely perceived directly.

300 Hz:

We can make the previous example sound a little better by making the separation a little smaller, 293 Hz. Why? Because now the two tones are an octave apart rather than a little more than an octave. Now we hear the D above middle C.

293 Hz:

Update: Here's a continuous version of the above examples. The separation of the two pitches at time t is 10t Hz.

Continuous:

Here's Python code that produced the .wav files. (I'm using Python 3.5.1. There was a comment on an earlier post from someone having trouble using similar code from Python 2.7.)

from scipy.io.wavfile import write
from numpy import arange, pi, sin, int16, iinfo

N = 48000        # sampling rate per second
x = arange(3*N)  # 3 seconds of audio

def beats(t, f1, f2):
    return sin(2*pi*f1*t) + sin(2*pi*f2*t)

def to_integer(signal):
    # Take samples in [-1, 1] and scale to 16-bit integers
    m = iinfo(int16).max
    M = max(abs(signal))
    return int16(signal*m/M)

def write_beat_file(center_freq, delta):
    f1 = center_freq - 0.5*delta
    f2 = center_freq + 0.5*delta
    file_name = "beats_{}Hz_diff.wav".format(delta)
    write(file_name, N, to_integer(beats(x/N, f1, f2)))

write_beat_file(440, 4)
write_beat_file(440, 16)
write_beat_file(440, 30)
write_beat_file(440, 100)
write_beat_file(440, 293)

In my next post on roughness I get a little more quantitative, giving a power law for roughness of an amplitude modulated signal.

Related: Psychoacoustics consulting

4 thoughts on “Acoustic roughness”

Are the files for 293 Hz vs 300 Hz swapped? I can hear two tones with the 300 Hz file, but only one in the 293 Hz file.

Anon: The files are not swapped. There is a hint of the higher tone in the 300 Hz file, but not in the 293 Hz file, at least as I hear it.

I suggest you try to make the same thing with a triangle wave. I suppose the harmonic content of the interference would be rather interesting!

The script works in Python 2.7 if "from __future__ import division" is included at the top. The issue is with the "x/N" in write_beat_file, since both x and N are ints. Maybe also the "m/M" in to_integer, but M is probably a float.
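(Two editorial notes. First, the opening claim follows from a standard sum-to-product identity: sin(2*pi*f1*t) + sin(2*pi*f2*t) = 2 cos(pi*(f1-f2)*t) sin(pi*(f1+f2)*t). The second factor oscillates at the average frequency (f1+f2)/2, while the first is a slow envelope; loudness tracks the magnitude of the envelope, which peaks |f1-f2| times per second. Second, the post mentions a continuous version but does not include its code; here is one possible sketch, reusing the definitions above. The 2.5*t**2 phase term is an assumption chosen so that the instantaneous separation works out to 10t Hz; the file name is invented.)

def sweep(t):
    # Each tone drifts 5*t Hz away from 440 Hz, so the separation is 10*t Hz.
    # Integrating the instantaneous frequency 440 +/- 5*t gives the phase
    # 2*pi*(440*t +/- 2.5*t**2).
    return sin(2*pi*(440*t - 2.5*t**2)) + sin(2*pi*(440*t + 2.5*t**2))

write("beats_continuous.wav", N, to_integer(sweep(x/N)))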
https://www.johndcook.com/blog/2016/03/30/acoustic-roughness/
CC-MAIN-2017-43
en
refinedweb
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

namespace boost {

class none_t {/* see below */};

const none_t none (/* see below */);

} // namespace boost

Class none_t is meant to serve as a tag for selecting appropriate overloads of optional's interface. It is an empty, trivially copyable class with a disabled default constructor. Constant none is used to indicate an optional object that does not contain a value in initialization, assignment and relational operations of optional.
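(A short usage sketch, an editor's illustration of the standard pattern rather than part of the reference itself:)

#include <boost/optional.hpp>
#include <iostream>

// Return an empty optional via boost::none when no value applies.
boost::optional<int> half(int x)
{
    if (x % 2 == 0)
        return x / 2;
    return boost::none;
}

int main()
{
    if (boost::optional<int> h = half(10))
        std::cout << *h << '\n';  // prints 5
}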
http://www.boost.org/doc/libs/1_65_0/libs/optional/doc/html/optional/reference.html
CC-MAIN-2017-43
en
refinedweb
A markdown text extractor written in Rust

Project description

The stupid simple rust markdown text puller

Basically, this has one function and one function alone. It takes all the human readable text from Markdown, and extracts it. That's it.

Why does this library exist? I needed it for a project, and pip was the easiest place to put it.

What can I do with this library? Literally anything. I'm not fussed. It's ten lines of code.

Installation

pip install rust-markdown-text-puller

Usage

from rust_markdown_text_puller import get_raw_text

print(get_raw_text("# Raw Markdown!"))

It really is that simple

License

MIT license. See the LICENSE file for details
https://pypi.org/project/rust-markdown-text-puller/
CC-MAIN-2020-05
en
refinedweb
Repository: kafka
Updated Branches: refs/heads/trunk 64bab8052 -> 3d7e88456

MINOR: Rephrase Javadoc summary for ConsumerRecord

The original Javadoc description for `ConsumerRecord` is slightly confusing in that it can be read in a way such that an object is a key value pair received from Kafka, but (only) consists of the metadata associated with the record. This PR makes it clearer that the metadata is _included_ with the record, and moves the comma so that the phrase "topic name and partition number" in the sentence is more closely associated with the phrase "from which the record is being received".

Author: LoneRifle <LoneRifle@users.noreply.github.com>

Reviewers: Ismael Juma <ismael@juma.me.uk>, Ewen Cheslack-Postava <ewen@confluent.io>

Closes #2290 from LoneRifle/patch-1

Branch: refs/heads/trunk
Commit: 3d7e88456f14fbac66309973f1334cb32c10e3ea
Parents: 64bab80
Author: LoneRifle <LoneRifle@users.noreply.github.com>
Authored: Thu Dec 29 21:14:26 2016 -0800
Committer: Ewen Cheslack-Postava <me@ewencp.org>
Committed: Thu Dec 29 21:14:26 2016 -0800

----------------------------------------------------------------------
 .../java/org/apache/kafka/clients/consumer/ConsumerRecord.java | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
----------------------------------------------------------------------

diff --git a/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java b/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java
index c8aef53..5f10155 100644
--- a/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java
+++ b/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java
@@ -16,8 +16,9 @@
 import org.apache.kafka.common.record.Record;
 import org.apache.kafka.common.record.TimestampType;

 /**
- * A key/value pair to be received from Kafka. This consists of a topic name and a partition number, from which the
- * record is being received and an offset that points to the record in a Kafka partition.
+ * A key/value pair to be received from Kafka. This also consists of a topic name and
+ * a partition number from which the record is being received, an offset that points
+ * to the record in a Kafka partition, and a timestamp as marked by the corresponding ProducerRecord.
  */
 public class ConsumerRecord<K, V> {
     public static final long NO_TIMESTAMP = Record.NO_TIMESTAMP;
http://mail-archives.eu.apache.org/mod_mbox/kafka-commits/201612.mbox/%3C0f617ebcdf764d63bbbf0b043aca55c3@git.apache.org%3E
CC-MAIN-2020-05
en
refinedweb
location_background_plugin 0.1.0+3

Flutter Background Execution Sample - LocationBackgroundPlugin #

An example Flutter plugin that showcases background execution using iOS location services. This plugin is not being actively maintained and is not for production use. An archive of previous versions can be found in the Flutter plugins repository.

Getting Started #

NOTE: This plugin does not currently have an Android implementation.

To import, add the following to your Dart file:

import 'package:location_background/location_background.dart';

Example usage:

import 'package:location_background/location_background.dart';

final locationManager = LocationBackgroundPlugin();

void locationUpdateCallback(Location location) {
  print('Location Update: $location');
}

Future<void> startMonitoringLocationChanges() =>
    locationManager.monitorSignificantLocationChanges(locationUpdateCallback);

Future<void> stopMonitoringLocationChanges() =>
    locationManager.cancelLocationUpdates();

WARNING: do not maintain volatile state or perform long running operations in the location update callback. There is no guarantee from the system for how long a process can perform background processing after a location update, and the Dart isolate may shut down during execution at the request of the system.

For help getting started with Flutter, view our online documentation. For help on editing plugin code, view the documentation.

0.1.0+3 #
- Update README.md indicating package support status.

0.1.0+2 #
- Fix Dart deprecation warnings.

0.0.2 #
- Added missing flutter_test package dependency.
- Added missing flutter version requirements.

0.0.1 #
- Initial release.

Flutter Background Plugin Example for iOS. #

Demonstrates how to use the LocationBackgroundPlugin, a sample plugin showcasing Flutter background execution on iOS.

Getting Started #

For help getting started with Flutter, view our online documentation.

Use this package as a library

1. Depend on it

Add this to your package's pubspec.yaml file:

dependencies:
  location_background_plugin: ^0.1.0+3

2. Install it

You can install packages from the command line with Flutter:

$ flutter pub get

3. Import it

Now in your Dart code, you can use:

import 'package:location_background_plugin/location_background_plugin.dart';
https://pub.dev/packages/location_background_plugin
CC-MAIN-2020-05
en
refinedweb
prompter_wa 0.0.1

example/main.dart

import 'package:prompter_wa/prompter_wa.dart';

void main() {
  var options = [
    Option("I want red", "#f00"),
    Option("I want blue", "#00f"),
  ];

  final prompter = Prompter();
  String colorCode = prompter.askMultiple("Select a color", options);
  bool choice = prompter.askBoolean("Do you like Dart?");

  print(colorCode);
  print(choice);
}

Use this package as a library

1. Depend on it

Add this to your package's pubspec.yaml file:

dependencies:
  prompter_wa: ^0.0.1

2. Install it

You can install packages from the command line:

$ pub get

3. Import it

Now in your Dart code, you can use:

import 'package:prompter_wa/prompter_wa.dart';
https://pub.dev/packages/prompter_wa
CC-MAIN-2020-05
en
refinedweb
Picking apart some cool code to draw pretty pictures with code that fits into 3 tweets: This comes from a codegolf question here: Tweetable Mathematical Art. Slides: Tweetable Art Code.

As it stands, lxpanel's taskbar plugin (part of the LXDE desktop you get with Lubuntu) does not show urgent (flashing) windows if they are on a different virtual desktop from the one you are looking at. This seems wrong to me (I logged Bug 682), and the fix is a little one-liner:

--- lxpanel.orig/src/plugins/taskbar.c 2013-08-27 23:57:55.000000000 +0100
+++ lxpanel/src/plugins/taskbar.c 2014-09-26 00:48:25.026855589 +0100
@@ -202,10 +202,10 @@
     tk->flash_timeout = g_timeout_add(interval, (GSourceFunc) flash_window_timeout, tk);
 }

-/* Determine if a task is visible considering only its desktop placement. */
+/* Determine if a task is visible considering only its desktop placement and urgency. */
 static gboolean task_is_visible_on_current_desktop(TaskbarPlugin * tb, Task * tk)
 {
-    return ((tk->desktop == ALL_WORKSPACES) || (tk->desktop == tb->current_desktop) || (tb->show_all_desks));
+    return ((tk->desktop == ALL_WORKSPACES) || (tk->desktop == tb->current_desktop) || (tb->show_all_desks) || tk->urgency);
 }

 /* Recompute the visible task for a class when the class membership changes.

To install this patch into Lubuntu, do something like this:

sudo apt-get install build-essential fakeroot dpkg-dev
sudo apt-get build-dep lxpanel
mkdir lxpanel
cd lxpanel
apt-get source lxpanel
cd lxpanel-*
wget
patch -p1 < show-urgent-windows-on-all-desktops.patch
dpkg-buildpackage -rfakeroot -b
cd ..
sudo dpkg -i lxpanel_*.deb
killall lxpanel
lxpanel --profile Lubuntu

A quine is a program that prints out its own source code. I will describe five examples:

Slides: Five Quines

Arguably the greatest program ever written:

More info on quines:

Posts in this series: Syntax, Deployment, Metaprogramming, Ownership

There is often a trade-off between programming language features and how fast (and predictably) the programs run. From web sites that serve millions of visitors to programs running on small devices we need to be able to make our programs run quickly.

One trade-off that is made in many modern programming languages (including Python, Ruby, C#, Java and JVM-based languages) is that the system owns all the memory. This avoids the need for the programmer to think about how long pieces of memory need to live, but it means a lot of memory can hang around a lot longer than it really needs to. In addition, it can mean the CPU has to jump around to lots of different memory locations to find pieces of dynamically-allocated memory in different locations. Where this jumping around causes caches to be invalidated, it can really slow things down.

While these garbage collection-based languages have been evolving, C++ has been developing along a different track. C++ allows the programmer to allocate and free up memory manually (as in C), but over time the community of C++ programmers has been developing a new way of thinking about memory, and developing tools in the C++ language to make it easier to work in this way. Modern C++ code rarely or never uses "delete" or "free" to deallocate memory, but instead defines clearly which object owns each other object.
When the owning object is no longer needed, everything it owns can be deleted, immediately freeing that memory. The top-level objects are owned by the current scope, so when the function or block of code we are in ends, the system knows these objects and the ones they own can be deleted. Objects that last for the whole life of the program are owned by the scope of the main function or equivalent.

One advantage of explicit ownership is that the right thing happens automatically when something unexpected happens (e.g. an exception is thrown, or we return early from a function). Because the objects are owned by a scope, as soon as we exit that scope they are automatically deleted, and no memory is "leaked".

Because ownership is explicit, we can often group owned objects in memory immediately next to the objects that own them. This means we jump around to different memory locations less often, and we have to do less work to find and delete regions of memory. This makes our programs faster.

Here are some things I like about this approach. In my experience, the most frequent performance problems I have had to solve have really been memory problems. Explicit ownership can reduce unnecessary memory management overhead by taking back the work from the system (the garbage collector) and allowing programmers to be explicit about who owns what.
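(As a concrete sketch of the idea, my illustration rather than code from the original post: modern C++ spells ownership with types like std::unique_ptr and containers, and never writes delete.)

#include <memory>
#include <vector>

struct Engine { /* ... */ };

struct Car {
    // A Car owns its Engine; destroying the Car destroys the Engine too.
    std::unique_ptr<Engine> engine = std::make_unique<Engine>();
};

void race() {
    std::vector<Car> fleet(3);  // the vector owns the Cars, the scope owns the vector
    // ... use fleet; if an exception is thrown here, nothing leaks ...
}   // leaving the scope destroys the vector, each Car, and each Engine automatically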
https://www.artificialworlds.net/blog/category/plain-c/
CC-MAIN-2020-05
en
refinedweb
pthread_abort()

Unconditionally terminate the target thread

Synopsis:

#include <pthread.h>

int pthread_abort( pthread_t thread );

Arguments:
- thread - The ID of the thread that you want to terminate, which you can get when you call pthread_create() or pthread_self().

Library:

libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The pthread_abort() function terminates the target thread. Termination takes effect immediately and isn't a function of the cancelability state of the target thread. No cancellation handlers or thread-specific-data destructor functions are executed.

Thread abortion doesn't release any application-visible process resources, including, but not limited to, mutexes and file descriptors. (The behavior of POSIX calls following a call to pthread_abort() is unspecified.)

The status of PTHREAD_ABORTED is available to any thread joining with the target thread. The constant PTHREAD_ABORTED expands to a constant expression, of type void *. Its value doesn't match any pointer to an object in memory, or the values NULL and PTHREAD_CANCELED.

The side effects of aborting a thread that's suspended during a call of a POSIX 1003.1 function are the same as the side effects that may be seen in a single-threaded process when a call to a POSIX 1003.1 function is interrupted by a signal and the given function returns EINTR. Any such side effects occur before the thread terminates.

Returns:
- EOK - Success.
- ESRCH - No thread with the given ID was found.
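(A minimal usage sketch, an editor's illustration based on the description above; error handling is elided.)

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    for (;;) {
        /* loop forever; no cancellation points are needed, since
           pthread_abort() ignores the cancelability state entirely */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    void *status;

    pthread_create(&tid, NULL, worker, NULL);
    pthread_abort(tid);   /* terminates immediately; no cleanup handlers run */
    pthread_join(tid, &status);
    if (status == PTHREAD_ABORTED)
        printf("worker was aborted\n");
    return 0;
}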
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/p/pthread_abort.html
CC-MAIN-2020-05
en
refinedweb
Analytics

Collecting analytics data for your app can be accomplished with Amazon Pinpoint. Set up the SDK from your project directory (the one that contains your app's .xcodeproj file), as follows.

- The Podfile that you configure to install the AWS Mobile SDK must contain:

platform :ios, '9.0'
target 'YourAppName' do
  use_frameworks!
  pod 'AWSPinpoint', '~> 2.12.0'
  pod 'AWSMobileClient', '~> 2.12.0'
  # other pods
end

Run pod install --repo-update before building, then add the imports:

/** start code copy **/
import AWSPinpoint
import AWSMobileClient
/** end code copy **/

- To send events with Amazon Pinpoint, you'll instantiate a Pinpoint instance. We recommend you do this during app startup, so you can use Pinpoint to record app launch analytics.

Reporting Events in Your Application

You can use the Pinpoint SDK to report usage data, or events, to Amazon Pinpoint. Standard session events are automatically recorded when you integrate your iOS app with the Pinpoint SDK as shown above.

// You can add this function in desired part of your app.
// It will be used to log events to the backend.
func logEvent() {
    if let analyticsClient = pinpoint?.analyticsClient {
        let event = analyticsClient.createEvent(withEventType: "EventName")
        event.addAttribute("DemoAttributeValue1", forKey: "DemoAttribute1")
        event.addAttribute("DemoAttributeValue2", forKey: "DemoAttribute2")
        event.addMetric(NSNumber(value: arc4random() % 65535), forKey: "EventName")
        analyticsClient.record(event)
        analyticsClient.submitEvents()
    }
}

The SDK also provides a helper for monetization events:

func sendMonetizationEvent() {
    if let analyticsClient = pinpoint?.analyticsClient {
        let event = analyticsClient.createVirtualMonetizationEvent(
            withProductId: "DEMO_PRODUCT_ID",
            withItemPrice: 1.00,
            withQuantity: 1,
            withCurrency: "USD"
        )
        analyticsClient.record(event)
        analyticsClient.submitEvents()
    }
}

To record authentication events, such as a sign-in, use the reserved event type _userauth.sign_in:

func sendUserSignInEvent() {
    if let analyticsClient = pinpoint?.analyticsClient {
        let event = analyticsClient.createEventWithEventType("_userauth.sign_in")
        analyticsClient.record(event)
        analyticsClient.submitEvents()
    }
}

Event Ingestion Limits

The limits applicable to the ingestion of events using the AWS Mobile SDK for iOS and the Amazon Pinpoint Events API can be found here.

You can also enrich the endpoint profile that Pinpoint maintains for the device with custom attributes:

// Add a custom attribute to the endpoint
if let targetingClient = pinpoint?.targetingClient {
    targetingClient.addAttribute(["science", "politics", "travel"], forKey: "interests")
    targetingClient.updateEndpointProfile()
    let endpointId = targetingClient.currentEndpointProfile().endpointId
    print("Updated custom attributes for endpoint: \(endpointId)")
}

To associate the endpoint with a user in your own system, assign a user ID:

if let targetingClient = pinpoint?.targetingClient {
    let endpoint = targetingClient.currentEndpointProfile()
    // Create a user and set its userId property
    let user = AWSPinpointEndpointProfileUser()
    user.userId = "UserIdValue"
    // Assign the user to the endpoint
    endpoint.user = user
    // Update the endpoint with the targeting client
    targetingClient.update(endpoint)
    print("Assigned user ID \(user.userId ?? "nil") to endpoint \(endpoint.endpointId)")
}

Endpoint Limits

The limits applicable to the endpoints using the AWS Mobile SDK for iOS and the Amazon Pinpoint Endpoint API can be found here.

Using Amazon Kinesis

The two classes AWSKinesisRecorder and AWSFirehoseRecorder let you batch records locally before sending them. The AWSKinesisRecorder client lets you store PutRecord requests on disk and then send them all at once. This is useful because many mobile applications that use Amazon Kinesis will create multiple PutRecord requests per second, and sending an individual request for each record would be costly. The AWSFirehoseRecorder client lets you store PutRecord requests on disk and then send them using Kinesis Data Firehose PutRecordBatch.

For more information about Amazon Kinesis Firehose, see Amazon Kinesis Firehose.

Integrating Amazon Kinesis and Amazon Kinesis Firehose

Add the following to your Podfile:

pod 'AWSKinesis', '~> 2.12.0'

The instructions direct you to import the headers for the services you'll be using. For this example, you need the following import.

import AWSKinesis

Once you have credentials, you can use AWSKinesisRecorder with Amazon Kinesis. The following snippet returns a shared instance of the Amazon Kinesis service client:

let kinesisRecorder = AWSKinesisRecorder.default()

You can use AWSFirehoseRecorder with Amazon Kinesis Firehose. The following snippet returns a shared instance of the Amazon Kinesis Firehose service client:

let firehoseRecorder = AWSFirehoseRecorder.default()

Configure Kinesis: You can configure AWSKinesisRecorder or AWSFirehoseRecorder through their properties:

kinesisRecorder.diskAgeLimit = TimeInterval(30 * 24 * 60 * 60); // 30 days
kinesisRecorder.diskByteLimit = UInt(10 * 1024 * 1024); // 10MB
kinesisRecorder.notificationByteThreshold = UInt(5 * 1024 * 1024); // 5MB

The diskAgeLimit property sets the expiration for cached requests. When a request exceeds the limit, it's discarded. The default is no age limit. The diskByteLimit property holds the limit of the disk cache size in bytes. If the storage limit is exceeded, older requests are discarded. The default value is 5 MB. Setting the value to 0 means that there's no practical limit. The notificationByteThreshold property sets the point beyond which Kinesis issues a notification that the byte threshold has been reached. The default value is 0, meaning that by default Amazon Kinesis doesn't post the notification.

To see how much local storage is being used for Amazon Kinesis PutRecord requests, check the diskBytesUsed property.

With AWSKinesisRecorder created and configured, you can use saveRecord() to save records to local storage.

let yourData = "Test_data".data(using: .utf8)
kinesisRecorder.saveRecord(yourData, streamName: "YourStreamName")

In the preceding example, we create an NSData object and save it locally. YourStreamName should be a string corresponding to the name of your Kinesis stream. You can create new streams in the Amazon Kinesis console. Here is a similar snippet for Amazon Kinesis Firehose:

let yourData = "Test_data".data(using: .utf8)
firehoseRecorder.saveRecord(yourData, streamName: "YourStreamName")

To submit all the records stored on the device, call submitAllRecords.

kinesisRecorder.submitAllRecords()
firehoseRecorder.submitAllRecords()

submitAllRecords sends all locally saved requests to the Amazon Kinesis service. Requests that are successfully sent will be deleted from the device. Requests that fail because the device is offline will be kept and submitted later. Invalid requests are deleted.

Both saveRecord and submitAllRecords are asynchronous operations, so you should ensure that saveRecord is complete before you invoke submitAllRecords. The following code sample shows the methods used correctly together.

// Create an array to store a batch of objects.
var tasks = Array<AWSTask<AnyObject>>()
for i in 0...100 {
    tasks.append(kinesisRecorder!.saveRecord(String(format: "TestString-%02d", i).data(using: .utf8), streamName: "YourStreamName")!)
}

AWSTask(forCompletionOfAllTasks: tasks).continueOnSuccessWith(block: { (task: AWSTask<AnyObject>) -> AWSTask<AnyObject>? in
    return kinesisRecorder?.submitAllRecords()
}).continueWith(block: { (task: AWSTask<AnyObject>) -> Any? in
    if let error = task.error as? NSError {
        print("Error: \(error)")
    }
    return nil
})

To learn more about working with Amazon Kinesis, see the Amazon Kinesis Developer Resources. To learn more about the Amazon Kinesis classes, see the class reference for AWSKinesisRecorder. To learn more about the Amazon Kinesis Firehose classes, see the class reference for AWSFirehoseRecorder.
https://aws-amplify.github.io/docs/sdk/ios/analytics
CC-MAIN-2020-05
en
refinedweb
Github user mxmrlv commented on a diff in the pull request:

--- Diff: aria/orchestrator/workflows/builtin/utils.py ---

@@ -14,33 +14,38 @@
 # limitations under the License.

 from ..api.task import OperationTask
+from .. import exceptions

-def create_node_task(operation_name, node):
+def create_node_task(interface_name, operation_name, node):
     """
     Returns a new operation task if the operation exists in the node, otherwise returns None.
     """
-    if _has_operation(node.interfaces, operation_name):
-        return OperationTask.node(instance=node,
-                                  name=operation_name)
-    return None
+    try:
+        return OperationTask.for_node(node=node,
+                                      interface_name=interface_name,
+                                      operation_name=operation_name)
+    except exceptions.TaskException:

--- End diff --

This might catch things we don't want to.

---
http://mail-archives.apache.org/mod_mbox/ariatosca-dev/201703.mbox/%3C20170319133037.06661F4B61@git1-us-west.apache.org%3E
CC-MAIN-2017-39
en
refinedweb
I've been having a discussion with some of my colleagues regarding how good a job we've done (and are doing) at explaining what WinFX actually contains. In the interests of keeping everyone on the same page, I'm talking about a description of WinFX like this:

WinFX is the .NET Framework 2.0 with the addition of four specific new technologies: Windows Presentation Foundation, Windows Communication Foundation, Windows Workflow Foundation, and Infocard (codename).

The part I'm concerned about is the part in bold, i.e. that WinFX is the .NET Framework...with additional pieces. Do you think that the bulk of developers realise this? It's really easy for us, who live with this stuff every day, to forget that most people have real lives to live and so might have missed the fact that WinFX is an evolution of the .NET Framework, so all of the good stuff you associate with .NET accrues to WinFX. It may be a non-issue but I wanted to see if anyone had experience of this being a problem...or whether it's just me. Feel free to email me or leave a comment if you have a view.

Why not WinFX = .NET 3.0? From your definition, .NET 2.0 is .NET 1.0 with some new namespaces.

The additional pieces (WCF, WPF, WF, InfoCard) add considerably to the power of .NET and allow a whole new type of application to be built. So .NET 3.0 probably wouldn't do it justice. Although it may have appeared that I underplayed the significance of these extra "pieces" in the post above, that wasn't my intention. WinFX is a *whole* lot more than the .NET Framework, but my point was that it is built on top of the framework and everything you associate with that – managed code, CLR, multiple languages, etc – are inherent in WinFX as a result. So my concern is that people may not understand that, and if so we should be more explicit about it.

I know what WinFX is but I disagree slightly on the fact that 'WinFX is the .NET Framework 2.0 with the addition of four specific new technologies'. I consider Windows Forms to be part of the .NET framework but IMHO WPF is a complete replacement of (among other things) Windows Forms. Even if WinFX builds upon the CLR 2.0, it replaces and obsoletes APIs and technologies that are installed as part of .NET Framework 2.0. I also feel like it's the version 3.0 of the framework.

If it's a feature of Windows, why not count it up? Instead of naming it ".NET 2.0 plus", ".NET 3.0 plus" and so on…

Johan – you can build Windows Forms apps using WinFX. In fact, for many people it's important to be able to do so as they have a big investment in Windows Forms. The two technologies are designed to work together, for example you can bring up a WPF window from your Windows Forms app, and place Windows Forms controls alongside WPF controls on a WPF window.

So WinFX is indeed the .NET Framework 2.0 + the "foundation" products and InfoCard.

Konstantin – the choice of name is a branding issue, and not one that I feel qualified to say too much about. However I am concerned that by changing the name we run the risk of confusing people. I'm sure our branding folks had good reasons for choosing the WinFX name, but we need to make sure it's clear that WinFX is built upon the .NET Framework.

I was confused for a long time up until about a month or two ago. Other devs I asked were also unsure exactly what WinFX was. The Is-A relationship is mostly accurate, but I prefer a hierarchical view: Thanks, Juval Lowy

Isn't this just about marketing? I noticed that the ".net" moniker was stripped from VS2005.
So, the .net framework (.net fx) now becomes Windows Framework (winfx) – is that right? Reminds me of the old MTS / COM+ days. No-one knows what .net means any more, just the same way no-one knew what ActiveX meant.

Thanks everyone for the comments. A couple of follow-ups:

Juval – that diagram you've linked to is a case in point: It suggests that WinFX is separate from the .NET Framework, which is wrong. I don't blame the people who created the diagram – I think we haven't been as clear as we could be about this stuff.

Trev – yes, you're right, it's marketing. And therefore should we care? I think we should if it means that people think that WinFX isn't managed code, or is a different way of doing what .NET does. We need to be clear that WinFX = .NET.

Thanks again all, this is really useful input.

With the change of name to .NET 3, I wonder if Ian could revisit this blog posting and perhaps get it updated with the latest terminology. I think the naming of WinFX was a tad confusing in the first place – was it .NET or not? So the clarity resulting from the name change is useful. However, speaking personally, I found the WinFX story a bit more compelling than ".NET 3.0". That WinFX was all new was clearer from the name, but maybe I'm just still trying to get used to the latest name and contents changes. I find it particularly confusing in that .NET 3 is actually part new and part .NET 2.0. The constant changing of names and contents is also to be regretted.

Thomas
I agree that this sounds like COM/DCOM/ActX, etc. IUnknown++ – bill
https://blogs.msdn.microsoft.com/ianm/2006/04/19/what-is-winfx-anyway/
CC-MAIN-2017-39
en
refinedweb
If you have the specified directory open in Windows Explorer, the Delete method may not be able to delete it.

The following code example deletes the specified directory or throws an exception if there are subdirectories.

using System;
using System.IO;

class Test
{
    public static void Main()
    {
        // Specify the directories you want to manipulate.
        string path = @"c:\MyDir";
        string subPath = @"c:\MyDir\temp";

        try
        {
            // Determine whether the directory exists.
            if (!Directory.Exists(path))
            {
                // Create the directory if it does not exist.
                Directory.CreateDirectory(path);
            }
            if (!Directory.Exists(subPath))
            {
                // Create the subdirectory if it does not exist.
                Directory.CreateDirectory(subPath);
            }

            // This operation will not be allowed because there are subdirectories.
            Console.WriteLine("I am about to attempt to delete {0}.", path);
            Directory.Delete(path);
            Console.WriteLine("The Delete operation was successful, which was unexpected.");
        }
        catch (Exception)
        {
            Console.WriteLine("The Delete operation failed as expected.");
        }
        finally {}
    }
}

- FileIOPermission for writing to the specified directory.
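(Editor's note: when you do want the directory removed along with its contents, the documented Directory.Delete(String, Boolean) overload handles it; a brief sketch:)

// Deletes c:\MyDir, including c:\MyDir\temp and any files, in one call.
Directory.Delete(@"c:\MyDir", true);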
https://msdn.microsoft.com/en-us/library/62t64db3(v=vs.100)
CC-MAIN-2017-39
en
refinedweb
Latest Jackson integration improvements in Spring

Updated on 2015/08/31 with an additional Jackson modules section

Spring Jackson support has been improved lately to be more flexible and powerful. This blog post gives you an update about the most useful Jackson related features available in Spring Framework 4.x and Spring Boot. All the code samples are coming from this spring-jackson-demo sample application, feel free to have a look at the code.

JSON Views

It can sometimes be useful to contextually filter objects serialized to the HTTP response body. In order to provide such capabilities, Spring MVC now has builtin support for Jackson's Serialization Views (as of Spring Framework 4.2, JSON Views are supported on @MessageMapping handler methods as well). The following example illustrates how to use @JsonView to filter fields depending on the context of serialization - e.g. getting a "summary" view when dealing with collections, and getting a full representation when dealing with a single resource:

public class View {
    interface Summary {}
}

public class User {

    @JsonView(View.Summary.class)
    private Long id;

    @JsonView(View.Summary.class)
    private String firstname;

    @JsonView(View.Summary.class)
    private String lastname;

    private String email;
    private String address;
    private String postalCode;
    private String city;
    private String country;
}

public class Message {

    @JsonView(View.Summary.class)
    private Long id;

    @JsonView(View.Summary.class)
    private LocalDate created;

    @JsonView(View.Summary.class)
    private String title;

    @JsonView(View.Summary.class)
    private User author;

    private List<User> recipients;

    private String body;
}

Thanks to Spring MVC @JsonView support, it is possible to choose, on a per handler method basis, which field should be serialized:

@RestController
public class MessageController {

    @Autowired
    private MessageService messageService;

    @JsonView(View.Summary.class)
    @RequestMapping("/")
    public List<Message> getAllMessages() {
        return messageService.getAll();
    }

    @RequestMapping("/{id}")
    public Message getMessage(@PathVariable Long id) {
        return messageService.get(id);
    }
}

In this example, if all messages are retrieved, only the most important fields are serialized thanks to the getAllMessages() method annotated with @JsonView(View.Summary.class):

[ {
  "id" : 1,
  "created" : "2014-11-14",
  "title" : "Info",
  "author" : {
    "id" : 1,
    "firstname" : "Brian",
    "lastname" : "Clozel"
  }
}, {
  "id" : 2,
  "created" : "2014-11-14",
  "title" : "Warning",
  "author" : {
    "id" : 2,
    "firstname" : "Stéphane",
    "lastname" : "Nicoll"
  }
}, {
  "id" : 3,
  "created" : "2014-11-14",
  "title" : "Alert",
  "author" : {
    "id" : 3,
    "firstname" : "Rossen",
    "lastname" : "Stoyanchev"
  }
} ]

In Spring MVC default configuration, MapperFeature.DEFAULT_VIEW_INCLUSION is set to false. That means that when enabling a JSON View, non annotated fields or properties like body or recipients are not serialized.
When a specific Message is retrieved using the getMessage() handler method (no JSON View specified), all fields are serialized as expected:

{
  "id" : 1,
  "created" : "2014-11-14",
  "title" : "Info",
  "body" : "This is an information message",
  "author" : {
    "id" : 1,
    "firstname" : "Brian",
    "lastname" : "Clozel",
    "email" : "[email protected]",
    "address" : "1 Jaures street",
    "postalCode" : "69003",
    "city" : "Lyon",
    "country" : "France"
  },
  "recipients" : [ {
    "id" : 2,
    "firstname" : "Stéphane",
    "lastname" : "Nicoll",
    "email" : "[email protected]",
    "address" : "42 Obama street",
    "postalCode" : "1000",
    "city" : "Brussel",
    "country" : "Belgium"
  }, {
    "id" : 3,
    "firstname" : "Rossen",
    "lastname" : "Stoyanchev",
    "email" : "[email protected]",
    "address" : "3 Warren street",
    "postalCode" : "10011",
    "city" : "New York",
    "country" : "USA"
  } ]
}

Only one class or interface can be specified with the @JsonView annotation, but you can use inheritance to represent JSON View hierarchies (if a field is part of a JSON View, it will also be part of the parent view). For example, this handler method will serialize fields annotated with @JsonView(View.Summary.class) and @JsonView(View.SummaryWithRecipients.class):

public class View {
    interface Summary {}
    interface SummaryWithRecipients extends Summary {}
}

public class Message {

    @JsonView(View.Summary.class)
    private Long id;

    @JsonView(View.Summary.class)
    private LocalDate created;

    @JsonView(View.Summary.class)
    private String title;

    @JsonView(View.Summary.class)
    private User author;

    @JsonView(View.SummaryWithRecipients.class)
    private List<User> recipients;

    private String body;
}

@RestController
public class MessageController {

    @Autowired
    private MessageService messageService;

    @JsonView(View.SummaryWithRecipients.class)
    @RequestMapping("/with-recipients")
    public List<Message> getAllMessagesWithRecipients() {
        return messageService.getAll();
    }
}

JSON Views could also be specified when using the RestTemplate HTTP client or MappingJackson2JsonView by wrapping the value to serialize in a MappingJacksonValue as shown in this code sample.

JSONP

As described in the reference documentation, you can enable JSONP for @ResponseBody and ResponseEntity methods by declaring an @ControllerAdvice bean that extends AbstractJsonpResponseBodyAdvice as shown below:

@ControllerAdvice
public class JsonpAdvice extends AbstractJsonpResponseBodyAdvice {

    public JsonpAdvice() {
        super("callback");
    }
}

With such @ControllerAdvice bean registered, it will be possible to request the JSON webservice from another domain using a <script /> tag:

<script type="application/javascript" src="">
</script>

In this example, the received payload would be:

parseResponse({
  "id" : 1,
  "created" : "2014-11-14",
  ...
});

JSONP is also supported and automatically enabled when using MappingJackson2JsonView with a request that has a query parameter named jsonp or callback. The JSONP query parameter name(s) could be customized through the jsonpParameterNames property.

XML support

Since the 2.0 release, Jackson provides first class support for some other data formats than JSON. Spring Framework and Spring Boot provide builtin support for Jackson based XML serialization/deserialization. As soon as you include the jackson-dataformat-xml dependency to your project, it is automatically used instead of JAXB2.
Using the Jackson XML extension has several advantages over JAXB2:

- Both Jackson and JAXB annotations are recognized
- JSON Views are supported, allowing you to easily build REST web services with the same filtered output for both XML and JSON data formats
- No need to annotate your class with @XmlRootElement; each class serializable in JSON will be serializable in XML

You usually also want to make sure that the XML library in use is Woodstox since:

- It is faster than the StAX implementation provided with the JDK
- It avoids some known issues like adding unnecessary namespace prefixes
- Some features like pretty print don't work without it

In order to use it, simply add the latest woodstox-core-asl dependency available to your project.

Customizing the Jackson ObjectMapper

Prior to Spring Framework 4.1.1, Jackson HttpMessageConverters were using the ObjectMapper default configuration. In order to provide a better and easily customizable default configuration, a new Jackson2ObjectMapperBuilder has been introduced. It is the JavaConfig equivalent of the well known Jackson2ObjectMapperFactoryBean used in XML configuration. Jackson2ObjectMapperBuilder provides a nice API to customize various Jackson settings while retaining Spring Framework provided default ones. It also allows you to create ObjectMapper and XmlMapper instances based on the same configuration.

Both Jackson2ObjectMapperBuilder and Jackson2ObjectMapperFactoryBean define a better Jackson default configuration. For example, the DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES property is set to false, in order to allow deserialization of JSON objects with unmapped properties. These classes also allow you to easily register Jackson mixins, modules, serializers or even a property naming strategy like PropertyNamingStrategy.CAMEL_CASE_TO_LOWER_CASE_WITH_UNDERSCORES if you want to have your userName Java property translated to user_name in JSON.

With Spring Boot

As described in the Spring Boot reference documentation, there are various ways to customize the Jackson ObjectMapper. You can for example enable/disable Jackson features easily by adding properties like spring.jackson.serialization.indent_output=true to application.properties.

As an alternative, Spring Boot also allows you to customize the Jackson configuration (JSON and XML) used by Spring MVC HttpMessageConverters by declaring a Jackson2ObjectMapperBuilder @Bean:

@Bean
public Jackson2ObjectMapperBuilder jacksonBuilder() {
    Jackson2ObjectMapperBuilder b = new Jackson2ObjectMapperBuilder();
    b.indentOutput(true).dateFormat(new SimpleDateFormat("yyyy-MM-dd"));
    return b;
}

This is useful if you want to use advanced Jackson configuration not exposed through regular configuration keys. If you just need to register an additional Jackson module, be aware that Spring Boot autodetects all Module @Bean definitions.
For example, to register jackson-module-parameter-names:

@Bean
public Module parameterNamesModule() {
    return new ParameterNamesModule(JsonCreator.Mode.PROPERTIES);
}

Without Spring Boot

In a plain Spring Framework application, you can also use Jackson2ObjectMapperBuilder to customize the XML and JSON HttpMessageConverters as shown below:

@Configuration
@EnableWebMvc
public class WebConfiguration extends WebMvcConfigurerAdapter {

    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        Jackson2ObjectMapperBuilder builder = new Jackson2ObjectMapperBuilder();
        builder.indentOutput(true).dateFormat(new SimpleDateFormat("yyyy-MM-dd"));
        converters.add(new MappingJackson2HttpMessageConverter(builder.build()));
        converters.add(new MappingJackson2XmlHttpMessageConverter(builder.createXmlMapper(true).build()));
    }
}

Jackson modules

Some well known Jackson modules are automatically registered if they are detected on the classpath:

- jackson-datatype-jdk7: Java 7 types like java.nio.file.Path (as of the 4.2.1 release)
- jackson-datatype-joda: Joda-Time types
- jackson-datatype-jsr310: Java 8 Date & Time API data types
- jackson-datatype-jdk8: other Java 8 types like Optional (as of the 4.2.0 release)

Some other modules are not registered by default (mainly because they require additional configuration) so you will have to register them explicitly, for example with Jackson2ObjectMapperBuilder#modulesToInstall() or by declaring a Jackson Module @Bean if you are using Spring Boot:

- jackson-module-parameter-names: adds support for accessing parameter names (feature added in Java 8)
- jackson-datatype-money: javax.money types (unofficial module)
- jackson-datatype-hibernate: Hibernate specific types and properties (including lazy-loading aspects)

Advanced features

As of Spring Framework 4.1.3, thanks to the addition of a Spring context aware HandlerInstantiator (see SPR-10768 for more details), you are able to autowire Jackson handlers (serializers, deserializers, type and type id resolvers). This could allow you to build, for example, a custom deserializer that will replace a field containing only a reference in the JSON payload with the full Entity retrieved from the database.
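The post stops short of showing such a deserializer, so here is a hypothetical sketch (Entity and EntityRepository are invented names; imports are omitted for brevity). The wiring uses SpringHandlerInstantiator, the class behind SPR-10768, so that Spring, not Jackson, instantiates the handler and can inject its dependencies:

public class EntityByIdDeserializer extends JsonDeserializer<Entity> {

    @Autowired
    private EntityRepository repository;  // injected because Spring instantiates this class

    @Override
    public Entity deserialize(JsonParser parser, DeserializationContext ctxt) throws IOException {
        // The payload carries only the id; swap it for the full entity.
        long id = parser.getLongValue();
        return repository.findOne(id);
    }
}

// Registering the Spring-aware handler instantiation on the builder:
Jackson2ObjectMapperBuilder builder = new Jackson2ObjectMapperBuilder();
builder.handlerInstantiator(new SpringHandlerInstantiator(beanFactory));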
https://spring.io/blog/2014/12/02/latest-jackson-integration-improvements-in-spring
CC-MAIN-2017-39
en
refinedweb
All postings are provided "As Is" with no warranties, and confers no rights. Opinions and views expressed here are not necessarily those of Microsoft Corporation. In a post to the WPF forum, Zhou Yong had the idea to use a MarkupExtension to make it possible to create a generic dictionary (Dictionary<K,V>) from Xaml. It’s a cool idea, so I played with it a bit, with the result shown below. The end result is that you can do the following, for example, where a ListBox is bound to a Collection<String>: <ListBox xmlns: <ListBox.ItemsSource> <generic:CollectionOfT <!-- Create a Collection<String> --> <sys:String>Hello</sys:String> <sys:String>World</sys:String> </generic:CollectionOfT> </ListBox.ItemsSource> </ListBox> … or the following, where a MyGenericType<string, int> is instantiated: <Generic TypeName="mytpes:MyGenericType"> <x:Type <x:Type </Generic> But First, Some Background on Generics support in Xaml For the most part, Xaml does not support generics. The one exception to that is that generics are supported on the root tag of your Xaml, if you’re compiling the Xaml. (Therefore, for example, it’s not supported if you’re loading Xaml directly into Internet Explorer.) Here’s an example of where a generic type is supported: <PageFunction x:Class="CSharp.MyPageFunction" xmlns:sys="clr-namespace:System;assembly=mscorlib" x:TypeArguments="sys:String" xmlns="" xmlns: <Grid> </Grid> </PageFunction> … where PageFunction is defined as: public class PageFunction<T> : PageFunctionBase Note that the x:TypeArguments attribute is how you specify type arguments to the generic type in Xaml. So that Xaml is equivalent to: public partial class MyPageFunction : PageFunction<String> { public MyPageFunction() { ... } ... } A Helper To Create Collection<T> in Xaml But working with Zhou’s approach of using markup extensions, here’s a markup extensions that helps with the case of a really common generic type: Collection<T> What we’ll end up with is a <CollectionOfT> markup extension. First, here’s a base class that provides some common functionality (we’ll be using this again later for List<T>, etc.): // // MarkupExtension that is base for an extension that creates // Collection<T>, List<T>, or Dictionary<T>. // (CollectionType is either IList or IDictionary). public abstract class CollectionOfTExtensionBase<CollectionType> : MarkupExtension where CollectionType : class public CollectionOfTExtensionBase(Type typeArgument) _typeArgument = typeArgument; // Default the collection to typeof(Object) public CollectionOfTExtensionBase() : this(typeof(Object)) // Items is the actual collection we'll return from ProvideValue. protected CollectionType _items; public CollectionType Items get { if (_items == null) { Type collectionType = GetCollectionType(TypeArgument); _items = Activator.CreateInstance(collectionType) as CollectionType; } return _items; } // TypeArgument is the "T" in e.g. Collection<T> private Type _typeArgument; public Type TypeArgument get { return _typeArgument; } set { _typeArgument = value; // If the TypeArgument doesn't get set until after // items have been added, we need to re-create items // to be the right type. if (_items != null) object oldItems = _items; _items = null; CopyItems(oldItems); // Default implementation of CopyItems that works for Collection/List // (but not Dictionary). 
    protected virtual void CopyItems(object oldItems)
    {
        IList oldItemsAsList = oldItems as IList;
        IList newItemsAsList = Items as IList;
        for (int i = 0; i < oldItemsAsList.Count; i++)
            newItemsAsList.Add(oldItemsAsList[i]);
    }

    // Get the generic type, e.g. typeof(Collection<>), aka Collection`1.
    protected abstract Type GetCollectionType(Type typeArgument);

    // Provide the collection instance.
    public override object ProvideValue(IServiceProvider serviceProvider)
    {
        return _items;
    }
}

With that base class in place, here’s the implementation of the CollectionOfT markup extension:

// MarkupExtension that creates a Collection<T>
[ContentProperty("Items")]
public class CollectionOfTExtension : CollectionOfTExtensionBase<IList>
{
    protected override Type GetCollectionType(Type typeArgument)
    {
        return typeof(Collection<>).MakeGenericType(typeArgument);
    }
}

And again, here’s that markup extension in action:

<generic:CollectionOfT>
    <!-- Create a Collection<String> -->
    <sys:String>Hello</sys:String>
    <sys:String>World</sys:String>
</generic:CollectionOfT>

A Helper to Create List<T>, ObservableCollection<T> in Xaml

Similarly, here are ListOfT and ObservableCollectionOfT markup extensions for creating List<T> and ObservableCollection<T>:

// MarkupExtension that creates a List<T>
public class ListOfTExtension : CollectionOfTExtensionBase<IList>
{
    protected override Type GetCollectionType(Type typeArgument)
    {
        return typeof(List<>).MakeGenericType(typeArgument);
    }
}

// MarkupExtension that creates an ObservableCollection<T>
public class ObservableCollectionOfTExtension : CollectionOfTExtensionBase<IList>
{
    protected override Type GetCollectionType(Type typeArgument)
    {
        return typeof(ObservableCollection<>).MakeGenericType(typeArgument);
    }
}

A Helper to Create Dictionary<T> in Xaml

Dictionary<K,V> is all that’s left to do. But Dictionary is a bit more complicated, because it has two type arguments, one for the key type and one for the value type. And there I cheated a little; I only put in support for the value type argument, and left the key type as always Object.

// MarkupExtension that creates a Dictionary<Object,T>
// (Items cannot be the [ContentProperty]).
public class DictionaryOfTExtension : CollectionOfTExtensionBase<IDictionary>
{
    protected override Type GetCollectionType(Type typeArgument)
    {
        return typeof(Dictionary<,>).MakeGenericType(typeof(Object), typeArgument);
    }

    protected override void CopyItems(object oldItems)
    {
        IDictionary oldItemsAsDictionary = oldItems as IDictionary;
        IDictionary newItemsAsDictionary = Items as IDictionary;
        foreach (DictionaryEntry entry in oldItemsAsDictionary)
            newItemsAsDictionary[entry.Key] = oldItemsAsDictionary[entry.Key];
    }
}

The other complication with Dictionary is that the Items property can’t be the [ContentProperty]. So when you use it in Xaml, you have to specify the explicit <DictionaryOfT.Items> tag. So usage ends up looking like:

<generic:DictionaryOfT>
    <!-- Dictionary<Object,String> -->
    <generic:DictionaryOfT.Items>
        <sys:String x:Key="hello">Hello</sys:String>
        <sys:String x:Key="world">World</sys:String>
    </generic:DictionaryOfT.Items>
</generic:DictionaryOfT>

A Helper to Create Other Generic Types

Finally, this is a more general extension that can create any generic type. For example, given this class:

public class MyGenericClass<T1,T2>
{
    private T1 _prop1;
    public T1 Prop1
    {
        get { return _prop1; }
        set { _prop1 = value; }
    }

    private T2 _prop2;
    public T2 Prop2
    {
        get { return _prop2; }
        set { _prop2 = value; }
    }
}

… and this “Generic” markup extension:

// Markup extension that creates an object from a constructed generic type.
[ContentProperty("TypeArguments")]
public class GenericExtension : MarkupExtension
{
    // The collection of type arguments for the generic type
    private Collection<Type> _typeArguments = new Collection<Type>();
    public Collection<Type> TypeArguments
    {
        get { return _typeArguments; }
    }

    // The generic type name (e.g. Dictionary, for the Dictionary<K,V> case)
    private string _typeName;
    public string TypeName
    {
        get { return _typeName; }
        set { _typeName = value; }
    }

    // Constructors
    public GenericExtension()
    {
    }

    public GenericExtension(string typeName)
    {
        TypeName = typeName;
    }

    // ProvideValue, which returns an object instance of the constructed generic type
    public override object ProvideValue(IServiceProvider serviceProvider)
    {
        IXamlTypeResolver xamlTypeResolver =
            serviceProvider.GetService(typeof(IXamlTypeResolver)) as IXamlTypeResolver;
        if (xamlTypeResolver == null)
            throw new Exception("The Generic markup extension requires an IXamlTypeResolver service provider");

        // Get e.g. the "Collection`1" type
        Type genericType = xamlTypeResolver.Resolve(_typeName + "`" + TypeArguments.Count.ToString());

        // Get an array of the type arguments
        Type[] typeArgumentArray = new Type[TypeArguments.Count];
        TypeArguments.CopyTo(typeArgumentArray, 0);

        // Create the constructed type, e.g. Collection<String>
        Type constructedType = genericType.MakeGenericType(typeArgumentArray);

        // Create an instance of that type
        return Activator.CreateInstance(constructedType);
    }
}

... you can create an instance of MyGenericClass, such as MyGenericClass<string,int>, from Xaml with something like:

<generic:Generic TypeName="mytypes:MyGenericClass">
    <x:Type TypeName="sys:String" />
    <x:Type TypeName="sys:Int32" />
</generic:Generic>

Published Friday, October 06, 2006 9:04 AM by MikeHillberg

I've made a couple of updates to this since I first posted it. One was a typo in the CollectionOfT type, the other was to add the GenericExtension in the last section of the post.

MikeHillberg

Hey Mike, I have another question. It seems the xaml parser doesn't recognize ParamArrayAttribute. You know, when I define my custom markup extension's constructor this way:

public GenericTypeExtension(String typeName, params Type[] typeArguments)
{
    //...
}

and I use it in xaml this way:

<Label Content="{co:Dictionary, sys:String, sys:Int32}"/>

When I do this, I get an exception saying that GenericTypeExtension doesn't have a constructor which accepts three arguments, which means that the xaml parser doesn't treat params the way we anticipate. Is this true? Is there any workaround to this problem, or does the current implementation of WPF prevent me from doing this? If so, I wish this feature could be added in the next version of WPF.

Sheva

Sheva

As an aside, I want to point out a minor mistake you made in your code. You wrote:

// Create the conrete type, e.g. Collection<String>
Type concreteType = genericType.MakeGenericType(typeArgumentArray);

Actually, I think when generic types are fed with type arguments, they become constructed types, not concrete types. You know, when you talk about concrete types, I always think about their contrary, aka abstract types. :)

Regarding parameter arrays, you're right that they aren't supported for markup extensions today, but I'll put it on the wish list for the next version.

Regarding "concrete" vs "constructed", you're right. I updated the text above. Thank you for the comments!

Mike,

First, I would just like to say thanks for the useful and insightful code. It has been quite useful for implementing support of generics in my use of XAML. I have a question regarding how this could be used with the XamlWriter. I believe I must use a TypeConverter to convert the property from a List<T> to a ListOfTExtension.
I have attempted to do so by applying the TypeConverterAttribute to the property I wish to convert. Alas, the only conversion that the type converter is asked to perform is to a System.String and not to a MarkupExtension. I performed some preliminary investigation and found that if I apply the TypeConverterAttribute to the generic class definition instead of the property in the class, then a conversion to a MarkupExtension is requested. Unfortunately, I cannot apply the TypeConverterAttribute to the List<T> class since it is defined in the framework.

As I mentioned, I have only performed some preliminary investigation into this issue and will perform a more in-depth search over the next couple of days. I was just interested if you had any insight into this issue, or if I am missing something very simple (which I probably am). Thanks in advance for any help or guidance you are able to provide.

Cheers, Trevor

tkmcclean

Hi Trevor, yes, I see this too. The one workaround I can think of is to not generate a List<T>, but a subclass that just exists to set the [TypeConverter]. Something like:

[TypeConverter(typeof(MyListConverter))]
public class MyList<T> : List<T>
{
    public MyList() { }
}

Hi Mike, Could you please provide a small example of how to use this with XamlWriter? How can TypeConverters be used to serialize generic classes? Thanks in advance, Zohrab.

Zohrab

Cool! We can instantiate the generic class MyGenericClass<T1,T2>, but how can we set values for Prop1 and Prop2 of MyGenericClass<T1,T2> in XAML? Thanks,

This is exactly what I'm looking for; thank you. I'm getting a compile error in the XAML trying to use the DictionaryOfT sample: "The attachable property 'Items' was not found in type 'DictionaryOfT'." Do you have a working sample, or advice on what I'm doing wrong? Carl.

Zodman
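For readers who want to see the reflection mechanics in isolation, here is a small self-contained C# sketch (mine, not from Mike's post) of the three steps every one of these extensions performs inside ProvideValue: resolve an open generic type, close it with MakeGenericType, and instantiate it with Activator.CreateInstance.

using System;
using System.Collections.ObjectModel;

class GenericInstantiationDemo
{
    static void Main()
    {
        // Step 1: start from the open generic type, Collection`1.
        Type openType = typeof(Collection<>);

        // Step 2: close it over a type argument to get Collection<string>.
        Type closedType = openType.MakeGenericType(typeof(string));

        // Step 3: instantiate the constructed type at runtime.
        var items = (Collection<string>)Activator.CreateInstance(closedType);
        items.Add("Hello");
        items.Add("World");

        Console.WriteLine(closedType.Name); // Collection`1
        Console.WriteLine(items.Count);     // 2
    }
}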
http://blogs.msdn.com/mikehillberg/archive/2006/10/06/LimitedGenericsSupportInXaml.aspx
crawl-002
en
refinedweb
Hi, I have downloaded the code from your article as I thought it looked a perfect solution for something I needed to do. Unfortunately, although my code builds and the intellisense works, I get an error when attempting a foreach iteration: "Binding Error: Member 'Section.Questions' is not a mapped member of 'Section'" (in my code, Section and Question are the equivalent of Order and Product in yours). I also have the problem that after referencing Section.Questions in code, I often find Visual Studio crashes! I have had to write code in Notepad and paste it in. I am using Visual Studio 2008 Beta 2. Any thoughts much appreciated.

Hi Neal, What UI technology are you using? (web, winforms, wpf). Could you send me the Section and Question classes by email? (including the partial definition adding the .Questions property) Mitsu

I'm having the same issue as Neal. Whenever I reference a collection of type ListSelector, Visual Studio 2008 Beta crashes. It happens when I try to get intellisense from the collection for a property such as 'Count' or a method like 'Add'. Pasting in from Notepad only sometimes works for me. I'm using ASP.NET on Windows Vista. The ListSelector property is sitting in a partial class. Any ideas?

Hi Dav, Could you send me a sample at mitsufu@microsoft.com? Thanks,

Mitsu, I am trying to use your excellent code, but I am using 2008 beta2 and apparently it is very broken in beta2. Do you perhaps have a version that will work with beta2 that you can share with us? :) Thank you for sharing your knowledge! :) Joel

Specifically, when I try to run the sample code, I get the error: "The type or namespace name 'TableAttribute' does not exist in the namespace 'System.Data.Linq' (are you missing an assembly reference?) C:\...\ListSelector\ListSelector\Northwind.designer.cs 715 31 ListSelector". Do you have a thought about why this might be happening?

I think some things have changed in the latest VS2008 builds (RC). TableAttribute is now available in the System.Data.Linq.Mapping namespace. The simplest way is to recreate the dbml file, dropping the same tables. I will provide an updated source when VS gets to RTM. A few weeks to wait...

Hi! I bet you've got a copy of VS 2008 RTM along with .NET 3.5. Any chance you've aligned the code in this blog entry with the final release yet? Thanks in advance!

Just a short post to tell that I have replaced the source code with the VS2008 RTM version:

Now that ScottGu blogged about it, we have received a number of great feedback and questions.

Thanks for this. It worked like a charm, and has made my life not-insignificantly easier.

Hi Mitsu, Thanks for the post, it has been very useful. I have one question: If you remove a Product from an Order, how can you ensure that the corresponding Order_Detail is also removed? Thanks! Mihai

Added this to a list of LINQ TO SQL Tutorials, Articles and Opinions

Yes, I am also interested in how to extend this with functionality to add or remove items from the relationship. It is very nice for a read-only collection, but that's quite limited. Thanks.

I am working on a solution for adding add/remove support. I hope to publish it quickly.
Your excellent extension is not terribly useful in a real application without the corresponding support for add/remove. I can imagine a couple of ways to attack this, but I'm sure your solution will be more optimal. Any idea on when you might publish that? Thanks. This is a critically important bridge until the EF gets here.

The ListSelector is a great idea. It is the best M:M solution I have found so far. However, its inability to handle inserts and deletes is a big problem for me. Have you found any way to solve this yet? I have a poor man's insert that seems to work, although I haven't used it too much other than some basic testing. (I tried several options to delete, and couldn't get anything to work - I always get null key errors, even working from both directions in coordination). Below is a basic insert that seems to work. Just call the method on the "parent" or containing class, passing in the contained object, and then call update on the context. (See the AddEvidence() method below.)

public partial class Case
{
    // ....

    // Many-to-Many wrapping
    // Many thanks to Mitsu of MS, see his blog for ListSelector ()
    private ListSelector<CaseEvidence, Evidence> evidence = null;

    public ListSelector<CaseEvidence, Evidence> Evidence
    {
        get
        {
            if (evidence == null)
                evidence = this.CaseEvidences.AsListSelector(ce => ce.Evidence);
            return evidence;
        }
    }

    public void AddEvidence(Evidence e)
    {
        CaseEvidence ce = new CaseEvidence();
        ce.Evidence = e;
        ce.Case = this;
        this.CaseEvidences.Add(ce);
    }

    // .....
}

Ok, here is a possible solution for add/remove support: I think it's extensible enough to answer many scenarios.

n:m relationships occur in most large databases. But how can this relationship be implemented with LINQ to SQL…

Just want to ask: if we make a relational database, why can't we add to the table? As we see here in the picture, the 3 tables are fixed, which means no more rows can be added. How can this be fixed?

Mitsu Furuta explores many-to-many relationships in LINQ to SQL in these two posts: How to implement…

Sorry, but am I misunderstanding something, or is the Order / Order Detail and Product not a many-to-many relationship at all? There are two one-to-many relations, but no many-to-many... A good example would be one where the object in the middle of both many sides should not be modeled in object-oriented programming. A good one... maybe a jobPost and a tag, where one job is related to a collection of tags and a tag is also related to the collection of jobs that contain that tag.

In my previous post (), I had proposed a simple solution for implementing many-to-many relationships using Linq to Sql. In this article, I will show one possible solution to implement a many-to-many relationship using Linq to Sql. Let's begin with some definitions and what Linq to Sql offers. A “many to many” relationship between two entities defines a kind of bi-directional…

Thanks for this. This helped me a lot. Very nice stuff, much obliged. A few hours of tinkering to adapt to my database schema, and my second DataGridView started showing data pulled using your mechanism. Once I've got that foot in the door I'm happy.
I'm not understanding what exactly the ListSelector class offers that couldn't be accomplished by implementing the Order class as follows:

public partial class Order
{
    private IList<Product> products = null;

    public IList<Product> Products
    {
        get
        {
            if (products == null)
                products = Order_Details.Select(detail => detail.Product).ToList().AsReadOnly();
            return products;
        }
    }
}

That would address the "we are losing the direct access to the elements that we had with OrderDetails[i] and many other features (add/remove, notifications, etc)" issue, wouldn't it? Is it that, unlike returning a List<> object, a ListSelector offers delayed loading/deferred execution? Although, I suppose even then the difference isn't that great, because my implementation at least uses lazy initialization; it's not like the contents of the Products property will be retrieved when the Order is instantiated/loaded.

@BACON: Visually you will get the same, but it's unusable. You will create a new collection each time you access the property!!! Call order.Products[0] then order.Products[1] and you will get two products belonging to two different collections... The goal of the ListSelector class is to create a proxy over a unique collection, changing the item accessor. This allows you not to create any extra collection while changing the element type of the resulting list.

I tried to use this pattern, but it makes Linq generate highly unusual SQL. The problem boils down to this:

var order = db.Orders.First();
foreach (var product in order.Order_Details.Select(x => x.Product))
{
    //...
}

I would expect this to generate two SQL statements: one to find the order, and then an inner join between Order_Details and Product. However, when done like the above, Linq to SQL generates one statement to find the order, one to find all entries in Order_Details, and then one statement FOR EACH matching row in Products. I am unable to understand why this is happening, but I can clearly see it if I output the generated queries by using the DataContext's Log property and the debug console. Strangely enough, if I do it like this:

var q = db.Orders.Where(o => o.OrderID == 10248);
var order = q.First();
var products = q.SelectMany(o => o.Order_Details, (o, d) => d.Product);
foreach (var product in products)
{
    // ...
}

I get the expected SQL. Can you shed any light on why I am getting this strange behavior? -- Sincerely, Anders

Hi Anders, I will try to explain. The SelectMany() is equivalent to:

from od in order.Order_Details
from p in od.Product
select p;

Using this syntax, we are building a single query that will be analyzed by the Linq to Sql engine to generate a sql join. If you are writing:

var product in order.Order_Details.Select()

the "order.Order_Details" is not part of the Linq to Sql expression. It's a classical Linq to Objects syntax that will trigger the lazy loading system. I don't know what you really want to query, but you could even make just one query with something like:

from o in db.Orders.Where(o => o.OrderID == 10248).Take(1)

Take(1) returns an enumeration of 1 element, which is different from First(), which returns the element itself >> no deferred execution.

Hi Mitsu, and thanks for the explanation! But perhaps I should try to explain a bit better. What I am trying to do is to define a property on an object that can be used to iterate over a many-to-many relation, like the pattern you are describing in the post.
With Northwind as an example, I am trying to define a property on an Order that returns a list of the products in it, basically like the one you have defined above:

public IEnumerable<Product> Products
{
    get { return Order_Details.Select(od => od.Product); }
}

However, as I described in my first post, this particular statement doesn't generate the SQL I would expect. I think I can follow your explanation, but I am unsure how to proceed from here. Is the only way to have Linq generate inner join SQL statements to start from the datacontext each time? If so, the choice is simple - either use one global datacontext, or a lot of locals and a lot of messing with the Attach method. Neither option seems super appealing to me. But just to make sure: if I have an Order object from somewhere (e.g. First()), is the only way to define a property on it that generates a proper inner join sql statement something like this?

public IEnumerable<Product> Products2
{
    get
    {
        var dc = new NorthwindDataContext();
        return dc.Order_Details
            .Where(x => x.OrderID == this.OrderID)
            .Select(x => x.Product);
    }
}

It just seems strange to me that the EntitySet doesn't understand how to do this by itself, but maybe that's just me...

I see. We try not to mix the model and the way we are loading data. For a single model you could have different ways to load data depending on where you are in your application. You can just build the model you want by creating properties and playing with EntitySets, and then use LoadOptions to define how you want the data to be retrieved. (See DataContext.LoadOptions and DeferredLoadingEnabled.)
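To make the proxy idea from Mitsu's replies concrete, here is a minimal read-only C# sketch of a projecting list (my illustration, not the actual ListSelector source - the real class also supports add/remove and change notifications):

using System;
using System.Collections;
using System.Collections.Generic;

// Wraps a source list and translates each element through a selector on
// access, so no second collection is ever materialized. Indexed access
// (order.Products[i]) always reflects the current state of the source.
public class ProjectionList<TSource, TResult> : IEnumerable<TResult>
{
    private readonly IList<TSource> _source;
    private readonly Func<TSource, TResult> _selector;

    public ProjectionList(IList<TSource> source, Func<TSource, TResult> selector)
    {
        _source = source;
        _selector = selector;
    }

    // Indexer delegates to the underlying list, then projects.
    public TResult this[int index]
    {
        get { return _selector(_source[index]); }
    }

    public int Count
    {
        get { return _source.Count; }
    }

    public IEnumerator<TResult> GetEnumerator()
    {
        foreach (TSource item in _source)
            yield return _selector(item);
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}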
http://blogs.msdn.com/mitsu/archive/2007/06/21/how-to-implement-a-many-to-many-relationship-using-linq-to-sql.aspx
crawl-002
en
refinedweb
Base class for plugins that wish to enable the insertion of custom HTML content into posts.

public class ContentSource : WriterPlugin

CreateContent - Create content using an Insert dialog box. Plugin classes that override this method must also be declared with the InsertableContentSourceAttribute class.
CreateContentFromLiveClipboard - Create content using the contents of a LiveClipboard XML document.
CreateContentFromUrl - Create content based on a URL.

The source of content to be inserted can be any or all of the following: an Insert dialog box, a URL, or Live Clipboard data. Implementers of this class should override the CreateContent method(s) corresponding to the content sources they wish to support. Note also that each of the CreateContent methods has a corresponding class-level attribute that must be specified along with the override.

There is a single instance of a given ContentSource created for each Windows Live Writer process. The implementation of ContentSource objects must therefore be stateless (the context required to carry out the responsibilities of the various methods is passed as parameters to the respective methods).

Inherits from:
System.Object
  WindowsLive.Writer.Api.WriterPlugin
    WindowsLive.Writer.Api.ContentSource

Assembly: windowslive.writer.api.dll
Namespace: WindowsLive.Writer.Api
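As a rough illustration of how a plugin might derive from this class, here is a hedged C# sketch. The attribute arguments and the exact CreateContent signature are recalled from the SDK of that era rather than taken from this page, so verify them against the API reference before relying on them; the GUID and names are placeholders.

using System.Windows.Forms;
using WindowsLive.Writer.Api;

// Hypothetical plugin identity; replace with your own GUID and names.
[WriterPlugin("89B9E3C7-0000-0000-0000-000000000000", "Sample Content Source")]
[InsertableContentSource("Sample Content")]
public class SampleContentSource : ContentSource
{
    // Handles the Insert dialog path; the InsertableContentSourceAttribute
    // above is the class-level attribute required alongside this override.
    public override DialogResult CreateContent(IWin32Window dialogOwner, ref string content)
    {
        // A real plugin would show a dialog here; this sketch just
        // hands fixed HTML back to Writer for insertion into the post.
        content = "<p>Hello from a ContentSource plugin!</p>";
        return DialogResult.OK;
    }
}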
http://msdn.microsoft.com/en-us/library/aa702823.aspx
crawl-002
en
refinedweb
Instead of re-hashing information I've found elsewhere, I figured a pre-reqs post would be good. One of…

A very neatly explained example. Great work!!!

Hi, I have tried this example. I am able to get the data from the excel, but I am unable to write data into the excel by using "SetCellA1". Can anyone help on this? Thanks & Regards, Amar...

What error are you getting? Can you share the code you are running?

Hi Shahar, I am not getting any error. I followed the same steps which you have given. This is the code with which I am writing data into the excel, which is in Sharepoint server 2007, using excel webservices.

Code:

Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
    Try
        Dim mystat() As MyExcelWebReference.Status
        Dim targetpath As String
        Using es As New MyExcelWebReference.ExcelService()
            targetpath = ""
            es.PreAuthenticate = True
            es.Credentials = System.Net.CredentialCache.DefaultCredentials
            sessionid = es.OpenWorkbook(targetpath, String.Empty, String.Empty, status)
            If (sessionid = "") Then
                MsgBox("Error in Opening Workbook")
            Else
                es.SetCellA1(sessionid, "Sheet1", "A1", "Amar")
            End If
            es.CloseWorkbook(sessionid)
        End Using
    Catch ex As Exception
        MsgBox(ex.Message)
    End Try
End Sub

End Code

Please look into this and give me the code sample.

Oh. Are you expecting to see the value change inside the Excel workbook inside sharepoint?

Thanks for the fast response... Yes, I want to write some data into the excel located on the share point server 2007. Can you give me the pointers? Amar....

Excel Services is not an authoring environment. It was not meant, in this version, to "save back" to the SP server. It's still possible, but requires a few more steps:

1. Call the GetWorkbook() method on the proxy to get back a byte array representing your workbook.
2. Use the SharePoint OM to save that byte array back to the server.

Thanks Shahar, please can you give me the sample code steps for this? Once again, thanks for the response. Regards,

I can, but it may actually take me a little while to do this. I don't have time for posting articles right now, and while it's not complex, it's a fair amount of code. Until I get to do it, I suggest you search for information on how to use the SPFile.SaveBinary() method and take a look at the GetWorkbook() method of Excel Services.

Thanks for the info.. I will check the details...

Hi All, I need some syntax to know the active excel sheet name of the workbook I have opened using excel webservices. Please give me the code samples for that. I want to see the active sheet name when I open the excel with the "view in browser" option in sharepoint server.

Can you elaborate on "I want to see the active sheet name when I open the excel in 'view in browser' option in sharepoint server"? Not sure I understand what you mean.

I have published my excel sheet to the sharepoint server 2007. In that sheet, in one cell, I have placed one formula (i.e. a udf function) to get the active sheet name. Now when I go and see the excel with the "view in browser" option in Sharepoint server 2007, I want to get the active sheet name in the given cell where I have given the formula. Please help me out on this. Amar...

What is the UDF that returns the ActiveSheet name exactly? I wasn't aware that we supported something like that. (Which, of course, does not mean much, since there are many Excel functions I am not aware of. :))
I need one excel built-in function to get the active sheet name. That is all I want to write in the UDF to get the active sheet name. Amar..

There is no such feature. Can you explain what you need it for? It may help us in a future release.

Thanks... I used the information in this blog to work with the excel services API within MOSS 2007 and was able to do a variety of activities with the exposed web service. I am trying to access the same web service from an RTM Project Server website. The website exposes the particular webservice, but I cannot add the reference itself. The Add Reference button is disabled and I get the following error messages:

<< The document at the url was not recognized as a known document type. The error message from each known type may help you fix the problem: - Report from 'DISCO Document' is 'Root element is missing.'. - Report from 'WSDL Document' is 'There is an error in XML document (0, 0).'. - Root element is missing. - Report from 'XML Schema' is 'Root element is missing.'. >>

Is this an issue specific to Project Server RTM? I then tried to access the excelservice webservice (the wsdl) of an RTM MOSS 2007 website and got the following error: (…). When I tried to add this webservice it allowed me, but without the wsdl document. I've never faced such problems with the Beta2 versions of MOSS. I haven't found any info on these issues elsewhere, so any help would be appreciated.

How does a workflow work in Excel services? Basically, I would like to know how the excel workbooks published to the server behave when a workflow is started. Will the documents be visible to a particular user only after they are approved by the initiator or some other approver? Thanks, Vandana

Anirudh: As for the first question: Excel Services is only enabled on the Enterprise version of MOSS. As far as I know, Project server is a different SKU that does not support it. For more help on that, you can try accessing the forums: As for the second question: What is it that you are doing exactly when you say "I then tried to access the excelservice webservice of a RTM MOSS 2007 website"? Where do you get that COM error you describe?

Vandana: It all depends on what workflow you apply to the document library in question. There is no inherent process that happens for workbooks on the server.

Thanks for the reply. I need a bit more clarification on workflows. I have used an approval workflow which routes the documents through all the workflow participants. What I feel is that only once a document is approved should it go to the next level for further processing, but I find no such restriction in excel services. Is there any locking mechanism, either through custom workflows or with predefined workflows, wherein a workbook is viewed only if it is approved by a particular user?

First off, thanks for the prompt reply! :) Well, it's a disappointment if excel services is not supported for project server, because even the project web access websites created under Project Server offer that excelservice webservice, i.e. you can see it when you try to add the web reference. However, it doesn't let you add this reference (button disabled), and the errors are printed in the web service description textbox. After that I tried to access a webservice of a MOSS RTM site (on the same machine) and got the COM error. I got a workaround for that (it was a site-specific error). That was not the main issue; I really needed the excelservice API for Project server. Are you sure it is not supported? :(

Anirudh, I am double-checking.
But I think Excel Services comes only as part of the enterprise version.

Vandana, I sent an email to the internal alias to see if something comes up.

Both: Consider using the Excel Services forums () for questions such as this - you will get more eyes on them.

Anirudh, Excel Services only comes as part of the MOSS for Internet Sites or the Enterprise packages. For more info, please follow this link:

Hi All, I have created an approval workflow, and I have added the approvers' names as administrator and spuser1 and assigned the workflow to the test document. After that, I logged in to the sharepoint server as another user. When I open the test document, the logged-in user is able to open and edit the document without it being approved, but it should display some type of message like "it is not yet approved by the administrator; the workflow is in process". Is there any locking available for the document while the workflow is processing? Can anybody give the flow of the workflow, the approval steps, and the workflow configuration? Regards,

I want to know how to auto-schedule my excel files in sharepoint server 2007. Is there any provision given by sharepoint server 2007 to schedule the files, and also, how can I set the scheduler programmatically? In sharepoint there is one dll, i.e. Microsoft.SharePoint.SPSchedule; how we can work with this I am not clear. Can anybody send sample code and help on this?

Can you explain what "Auto Scheduling" means? What's the expected behavior? What do you want them to auto-schedule to?

Hi Shahar, While opening the workbook I am getting the "The file that you selected could not be found. Check the spelling of the file name and verify that the location is correct" error, but the path of the excel that I am giving is correct. How can I resolve this?

Can we refresh a document that is stored in Sharepoint under an Excel trusted location by calling the Refresh method that is available with Excel Services? Is this possible?

I am getting an error while opening the workbook; the error is "You do not have permissions to open this file on Excel Services." The excel file that I am opening is in a trusted location in sharepoint. Do we need to give any permissions to that document in the sharepoint document library? How do I resolve this? Here is the full code:

ExcelService currentService = new ExcelService();
string targetPath = "";
Status[] outStatus;
currentService.Credentials = System.Net.CredentialCache.DefaultCredentials;
string sessionId = currentService.OpenWorkbook(targetPath, "en-US", "en-US", out outStatus);

Thanks in Advance, Sasya

Can you choose "View in WebBrowser" in the drop down on the file?

"View in WebBrowser" in the drop down on the file is working from the Sharepoint document library, but I am getting the error while accessing it using Excel web services. How do I resolve this error? The excel sheet that I am trying to access from Excel Services also has Data Connections, which are used to pull data from a database (SQL server). Thanks

I am using Excel services for refreshing the excel sheet and then trying to save the refreshed document to the Sharepoint library. Below is the thing that I am trying out:

1. Opening the workbook from the Sharepoint trusted location.
2. Refreshing the excel report (has Pivot Tables and Pivot Charts).
3. Writing refreshed data to an external file.
Here is the code:

ExcelService currentService = new ExcelService();
string targetPath = "";
string sessionId = currentService.OpenWorkbook(targetPath, "en-US", "en-US", out outStatus);
currentService.Refresh(sessionId, "Employees");
byte[] contents = currentService.GetWorkbook(sessionId, WorkbookType.FullWorkbook, out outStatus);
using (BinaryWriter binWriter = new BinaryWriter(File.Open(@"C:\Test.xlsx", FileMode.Create)))
{
    binWriter.Write(contents);
}
currentService.CloseWorkbook(sessionId);

I have modified my data source by adding new records, but even after the refresh the excel report is not getting refreshed. When I check Test.xlsx, it is the same as the source excel and not updated with the latest data from the data source. When I refresh the connection from Excel, the sheet gets refreshed with the latest data from my datasource. Is there anything I am missing here? Any help will be appreciated.

Sasya: RE: Failure of EWA: What's the error you are seeing? RE: the API question: Hmm. That's very strange. 1. What does the connection shunt information to? (PivotTable?) 2. What happens if you call .RefreshAll() instead of .Refresh()?

Thanks for the response.

ConnectionString info: Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=SolutionMetaData;Data Source=xxxx;Use Procedure for Prepare=1;Auto Translate=True;Packet Size=4096;Workstation ID=Test;Use Encryption for Data=False;Tag with column collation when possible=False
CommandType: Table
CommandText: DataBase.TableName (Sample.Employees)

From the C# code for Excel Services: if I call .Refresh passing in the second parameter as empty, it will refresh all the connections (source: MSDN). I have tried the call below, but it still doesn't refresh:

currentService.Refresh(sessionId, "");

If I refresh the report from the Excel client by using Refresh or RefreshAll from the Connections tab, I am able to see the updated report in excel. But that is not happening from Excel web services.

I have a problem opening a workbook from a remote machine using Excel web services. I am getting the following error: "URL authorization failed for the request". I am using the default credentials for Excel web services in code. From machine X I am calling OpenWorkbook for the url on machine MySite. The URL passed in as the parameter for OpenWorkbook is: Thanks in advance, SL

SL: Can you navigate to ?

If I try to access the url, I am getting the error from machine X as well as from machine MySite, and it just displays that an error has occurred. In the EventViewer I didn't see any message related to that. But on MySite I can view it without any error in the WebBrowser by clicking the "View in WebBrowser" link in the dropdown in the document library. And if I access ExcelServices.asmx on the MySite machine, I am able to access it without any error. In IIS, Office Server Web Services is configured with both Anonymous and Windows authentication. MySite/_vti_bin/ExcelServices.asmx is also configured with both Anonymous and Windows authentication.

Shahar, I am stuck with the data refresh problem using Excel services from C# code. It is not working as documented in MSDN if my document has data connections to external data sources.

Sasya, If you choose "Refresh All Connections" from the EWA toolbar, does it work? (So, load the file in EWA, select "Refresh All", and see if you get the new information you expect.)

SL, the Excel Services API isn't accessible to anonymous users, so you either must authenticate by supplying network credentials (e.g.
CredentialCache.DefaultCredentials), or, if you'd like, you can modify the permissions of the anonymous user so that it has the "UseRemoteApi" permission. You can alter the permissions of the anonymous user from the OM, or you can grant anonymous users full access to websites. I hope this helps.

Hi Levin, Here is the code that I am using:

ExcelService currentService = new ExcelService();
string targetPath = "";
currentService.Refresh(sessionId, "");

In my code I am using CredentialCache.DefaultCredentials, but OpenWorkbook is still throwing an error. Anonymous access in IIS is configured with the service account. And I have tried running this code from the machine MySite; I am still getting the error.

Hi Anirudh, if you have created a site collection, then use the url reference as: http://<server>/sites/<sitename>/_vti_bin/excelservice.asmx Regards, Majeti

I have deployed a web application which uses the excel web service on a sharepoint site. I have some named items on the excel sheet. How can I programmatically get the list of all the named items/parameters defined in there? Sush

You cannot. The only workarounds you have available (afaik) are:

1. Know the names beforehand.
2. Place the names in a hidden sheet and have a well-known range name for the range that contains all the named ranges. Then use that to get the names.

Hi there, My project/app is called Webexcel1. I added a web reference called ES, but it doesn't recognize the namespace ES. I tried it with the first using and the second one.

...
//using Webexcel1.ES;
using System.Web.Services.Protocols;

excelService.Credentials = System.Net.CredentialCache.DefaultCredentials;

Thanks! Frank

Already solved my problem. It was due to a lack of knowledge in VS. I had the using declarations in the wrong spot of my project. Basics.... Works fine now. Thanks and good luck.

Thanks for the nice article. But I am getting a Soap Exception error, an "An error has occurred" message, on the following line:

string test = es.OpenWorkbook(targetWorkbookPath, "en-US", "en-US", out outStatus);

Can anyone help me with why I am getting this exception and how I can solve it? Thanks in advance.

Hi! My servlet returns an excel file. I am getting the excel file for a javascript request, but let me know how I can open or display the excel file through the ajax responseText. Regards, Thana

That's not something you can do today. The only thing you can do is use AJAX to get data from Excel Services.

Is it necessary to install Office 2007 on the developer's machine, or can it be accessed from the share point server? If Office 2003 is present on the local system, what are the minimum requirements for the developer machine if the share point server machine has share point 2007 and excel 2007?

You don't need anything installed on the developer machine - just a connection to the Excel Services SharePoint.

I'm running into a soap exception error that a few of you mentioned previously. I get a "You do not have permissions to open this file on Excel Services" error on the line of code where the ExcelService.OpenWorkbook function is called. I'm passing into the function the default credentials and a valid location. I've added the location of the xlsx workbook as a trusted WSS file location, but I still get this error. I'd like to know if there are any possible explanations for this error. Thanks in advance, guys.

Can you open the file via EWA? When you run into issues like that, that's the first thing you should check - that will tell you whether the issue is with the server or with your software.

I can open it via EWA.
Sorry about leaving that part out before. I'm thinking it might have something to do with either the default credentials I'm passing into the function, or with the workbook location. The workbook is located in a content database path ().

Can I set security at the sheet level in a published workbook in Excel Services? My intention is to publish a workbook with four sheets (i.e. user1, user2, user3, Admin). Every user has access only to his/her sheet. But Admin can view all sheets, and by using excel built-in formulas he can do some calculations taking input from some/all of the other users' sheets. Is this possible in Excel Services?

Jason: If your user can open it in a browser, they should be able to open it through the API. What type of application is this?

Phaneendra: We do not support this level of granularity with security - sorry.

I am facing a similar kind of issue regarding the SOAP exception at the OpenWorkbook method. I am able to open the sheet in the browser, i.e. it is working fine in EWA, but not when using the web service. I have checked that the workbook is stored at a trusted location. I have also checked the permissions of the user; it's not working even when providing Full Control. What else could be the possible reason for this SOAP exception?

Can you paste the skeleton of your code (everything relevant up to and including the OpenWorkbook call)?

Here is that snippet of code:

private void button1_Click(object sender, EventArgs e)
{
    ExcelService proxyService = new ExcelService();
    proxyService.SoapVersion = SoapProtocolVersion.Soap12;
    proxyService.Credentials = System.Net.CredentialCache.DefaultCredentials;
    Status[] status = null;
    string SessionId = null;
    string pathWorkbook = "";

    SessionId = proxyService.OpenWorkbook(pathWorkbook, String.Empty, String.Empty, out status);
    status = proxyService.SetCellA1(SessionId, "Calculator", "Loan", textBox1.Text);
    status = proxyService.SetCellA1(SessionId, "Calculator", "Rate", textBox2.Text);
    status = proxyService.SetCellA1(SessionId, "Calculator", "Years", textBox3.Text);
    status = proxyService.CalculateWorkbook(SessionId, CalculateType.CalculateFull);

    object result = null;
    result = proxyService.GetCellA1(SessionId, "Calculator", "Payment", true, out status);
    if (result != null)
        textBox4.Text = result.ToString();

    proxyService.CloseWorkbook(SessionId);
}

It's just a small application for calculation.

SessionId = proxyService.OpenWorkbook(pathWorkbook, String.Empty, String.Empty, out status);

This line of code is throwing the SOAP exception.

Hey guys, I have a problem........ While using UDFs I get the return values in a data table. Now I want to show this entire table in the excel sheet as it is. How can I do that?

Can you further explain what the issue is? What do you mean by data table? What do you mean by "as it is"?

The OpenWorkbook() is throwing the following exception: "Excel Web Services could not determine the Windows SharePoint Services site context of the calling process". Please note: 1> I'm using an Excel web reference, not static binding. 2> The excel file is in a trusted location. 3> I'm able to browse the excel file using IE. Please let me know what could be the problem?
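Earlier in this thread, the save-back workaround is described in two prose steps (GetWorkbook, then the SharePoint object model). A minimal C# sketch of that flow might look like the following; the ExcelService, Status, and WorkbookType names come from a generated web reference as used elsewhere in the thread, the URL argument is a placeholder, and the SharePoint calls are shown as an assumption of the WSS 3.0 object model:

using System;
using Microsoft.SharePoint;

public static class WorkbookSaveBack
{
    // Sketch of the save-back flow: pull the workbook bytes through
    // Excel Web Services, then write them back with the SharePoint OM.
    public static void SaveWorkbookBack(ExcelService es, string workbookUrl)
    {
        Status[] status;
        string sessionId = es.OpenWorkbook(workbookUrl, String.Empty, String.Empty, out status);
        try
        {
            // Step 1: get the full workbook as a byte array.
            byte[] bits = es.GetWorkbook(sessionId, WorkbookType.FullWorkbook, out status);

            // Step 2: save the bytes back via the SharePoint object model.
            using (SPSite site = new SPSite(workbookUrl))
            using (SPWeb web = site.OpenWeb())
            {
                SPFile file = web.GetFile(workbookUrl);
                file.SaveBinary(bits);
            }
        }
        finally
        {
            es.CloseWorkbook(sessionId);
        }
    }
}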
http://blogs.msdn.com/cumgranosalis/archive/2006/03/24/ExcelServicesHelloWorld.aspx
crawl-002
en
refinedweb
Provides a lightweight control for displaying small amounts of flow content.

Visual Basic:

<ContentPropertyAttribute("Inlines")> _
<LocalizabilityAttribute(LocalizationCategory.Text)> _
Public Class TextBlock _
    Inherits FrameworkElement _
    Implements IContentHost, IAddChild, IServiceProvider

Dim instance As TextBlock

C#:

[ContentPropertyAttribute("Inlines")]
[LocalizabilityAttribute(LocalizationCategory.Text)]
public class TextBlock : FrameworkElement, IContentHost, IAddChild, IServiceProvider

C++:

[ContentPropertyAttribute(L"Inlines")]
[LocalizabilityAttribute(LocalizationCategory::Text)]
public ref class TextBlock : public FrameworkElement, IContentHost, IAddChild, IServiceProvider

J#:

public class TextBlock extends FrameworkElement implements IContentHost, IAddChild, IServiceProvider

XAML:

<TextBlock>
  Inlines
</TextBlock>

Content Model: TextBlock supports the hosting and display of Inline flow content elements. Supported elements include AnchoredBlock, Bold, Hyperlink, InlineUIContainer, Italic, LineBreak, Run, Span, and Underline. See TextBlock Content Model Overview for more information.

Horizontally and vertically aligning text within a TextBlock is done with the HorizontalContentAlignment and VerticalContentAlignment properties.

Platforms: Windows 7, Windows Vista, Windows XP SP2, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003
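A short C# sketch (not part of the reference page) showing the same content model from code - because Inlines is the content property, flow elements are added to that collection rather than assigned to a single text string:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;

static class TextBlockExample
{
    // Builds a TextBlock hosting several Inline flow content elements.
    public static TextBlock Create()
    {
        TextBlock textBlock = new TextBlock();
        textBlock.TextWrapping = TextWrapping.Wrap;
        textBlock.Inlines.Add(new Run("Plain text, "));
        textBlock.Inlines.Add(new Bold(new Run("bold text, ")));
        textBlock.Inlines.Add(new Italic(new Run("italic text.")));
        return textBlock;
    }
}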
http://msdn.microsoft.com/en-us/library/system.windows.controls.textblock.aspx
crawl-002
en
refinedweb
Web Widgets with .Net : Part One

This two-part article is based on a presentation I gave at Tech Ed North America, Developers Week, 2008, about using .Net to create Web Widgets for the MSDN and TechNet sites. Part one will cover the creation of a basic widget, written in C# and JavaScript. Part two will update that widget to be more dynamic using C#, LINQ, JavaScript, JSON (JavaScript Simple Object Notation), and CSS.

Web Widgets : A Simple Definition

The best place to start is to give you a definition of Web Widgets. From Wikipedia.org, the following is a good description: “(a web widget) is a portable chunk of code that can be installed and executed within any separate HTML-based web page by an end user without requiring additional compilation.”

The basic idea is that a consumer site only needs to deploy a simple script block which requires no compilation. We achieve this by setting the script block’s source (src) parameter to a Uri on a separate widget host server. All server-side processing is done on the host server, which sends back client-side script to render the widget on the consumer. There are probably hundreds of possible ways to build a Web Widget, but I’m going to walk through one specific example using .Net.

Typically, Web Widgets can be consumed on any domain outside the host domain. However, Browser Cookie data is available in the Request only if the widget is deployed within the same domain as it is hosted. Additionally, Request information such as Server Variables and QueryString is available in any domain.

Let’s Write Some Code : A Basic Widget using an HttpHandler

Now that we’ve covered some background about what we are building, let’s get to work. As I mentioned before, we are going to be using C# and JavaScript to build this. You could really use any .Net language for the middle tier, but I found that the syntactical similarities between JavaScript and C# made switching gears between the two tiers a bit less jarring. The code for this first example can be downloaded here:

Step 1 : Create an HttpHandler Base Class for all widgets to use.

First, let’s create a base class that can be used for multiple widgets. It will handle the basic functionality of an HttpHandler. Note: I chose to use an HttpHandler here in order to avoid the overhead of the ASP.Net lifecycle. There is no need in this pattern to use Viewstate, Page Events, or Postbacks, so I can avoid all the unnecessary overhead by using an HttpHandler.

1) Create a new Visual Studio C# Web Project and call it “Demo1”

2) Add a new directory called Handlers and add a WidgetBase.cs class file inside this folder. Mark that class as abstract since it will be our base class.

3) Implement the IHttpHandler interface

4) Using the VS2008 IntelliSense context menu, stub out the required members for an HttpHandler. This will add the following interface members…

IsReusable - property to let IIS know whether this handler can be re-used by other threads.
ProcessRequest(HttpContext context) - the main method of a handler. This is where we will do the work to write a response to the browser.

5) Set IsReusable to return false to ensure thread safety – we don’t want any other requests to come in and modify variables

6) Add null reference checks to ProcessRequest, to avoid null reference exceptions

if (context != null && context.Request != null && context.Response != null)
{
}

7) Add an abstract BuildOutput() method that returns a string.
We want to force inheritors to use this method.

public abstract string BuildOutput();

8) Add a member variable to hold a reference to the HttpContext object that is passed into ProcessRequest

private HttpContext _context;

9) Response.Write the results of BuildOutput in ProcessRequest, using the Response object in the _context member variable just added in step 8.

_context = context;
_context.Response.Write(BuildOutput());

10) Add Request and Response shortcuts, using the _context member variable. We will be using these shortcuts later, in the widgets that use this base class.

/// <summary>
/// Shortcut to the Response object
/// </summary>
public HttpResponse Response
{
    get { return _context.Response; }
}

/// <summary>
/// Shortcut to the Request object
/// </summary>
public HttpRequest Request
{
    get { return _context.Request; }
}

That is all the work we need to do in WidgetBase for now. Let’s move on to create an object of type WidgetBase.

Step 2 : Create an EventsWidget class and wire up the handler in IIS

1) Add a new class file called EventsWidget.cs to the Handlers directory

2) Implement the WidgetBase class and stub out the BuildOutput method using the handy VS2008 IntelliSense dropdown

3) Create and return a string called “output” in the BuildOutput method and initialize it to the common “Hello World!” value, since this is the first time we’ll run this application.

public override string BuildOutput()
{
    string output = "Hello World!";
    return output;
}

4) Open an IIS Management Console and navigate down to the application you are running this project in. (Create an application if you haven’t already.)

5) In the application configuration features view, look for the “Handler Mappings” feature and double click it to see a list of existing handlers.

6) In the “Actions” area of this view, click the link to “Add Managed Handler”

7) Set up the new Managed Handler similar to the settings below.

Request Path: eventswidget.jss
Type: Demo1.Handlers.EventsWidget
Name: EventsWidget

8) The handler should be ready to run now, so let’s check it in a browser. Open up your favorite browser and navigate to the application you’ve been working in. Add the request path you created in step 7 above to the URI to hit the handler you’ve created. You should see a friendly “Hello World!” message on the screen.

Step 3 : Modify the HttpHandler to output JavaScript

1) Add a new directory called “templates” to hold script templates, and add a new file called core.js. This file will be used as a utility class to hold functions common to widgets.

2) Open /templates/core.js and add the following code to set up a namespace, constructor, and class definition for an object called Core. (Note: you should modify the namespace to match your own company or team)

/** Microsoft.Com.SyndicationDemo.Core.js: shared functionality for widgets **/

// Namespace definition : All Widget Classes should be created in this namespace
if (!window.Microsoft) window.Microsoft = {};
if (!Microsoft.Com) Microsoft.Com = {};
if (!Microsoft.Com.SyndicationDemo) Microsoft.Com.SyndicationDemo = {};

// Widget Core Constructor
Microsoft.Com.SyndicationDemo.Core = function(message)
{
    document.write(message);
};

// Widget Core Class Definition
Microsoft.Com.SyndicationDemo.Core.prototype =
{
};

var core = new Microsoft.Com.SyndicationDemo.Core("Hello from JS!");

The above JavaScript code is using a feature of the JavaScript language to create a class definition, called Prototype-based Programming.
When we “new” up a version of this Core object, it will run the code in the constructor and write out a message via JavaScript.

3) Add a method to WidgetBase which will allow us to read the contents of the JavaScript file we just created. Modify WidgetBase.cs by adding the following method:

public string ReadFromFile(string fileName)
{
    string fileContents = string.Empty;
    if (HttpContext.Current.Cache[fileName] == null)
    {
        string filePath = HttpContext.Current.Server.MapPath(fileName);
        if (File.Exists(filePath))
        {
            StreamReader reader;
            using (reader = File.OpenText(filePath))
            {
                fileContents = reader.ReadToEnd();
            }
        }
        HttpContext.Current.Cache.Insert(fileName, fileContents,
            new System.Web.Caching.CacheDependency(filePath));
    }
    else
    {
        fileContents = (string)HttpContext.Current.Cache[fileName];
    }
    return fileContents;
}

4) Modify EventsWidget.cs to read the JavaScript file using the method we just created. Change the content of the BuildOutput() method to the following:

string output = base.ReadFromFile(ConfigurationSettings.AppSettings["CoreJavascriptFileName"]);
return output;

5) Lastly, let's hook up this handler in a script block so that it actually runs in a script context. Add a new file called default.aspx to the root of the application. This file will serve as a test harness for our widgets. Add the following script block to the new page:

<script type="text/javascript" src="eventswidget.jss">
</script>

6) At this point, we can now hit the new default.aspx in the browser. It should display a friendly message from JavaScript. If you do not see any message, try hitting the handler directly to see if it is throwing an error.

In part two of this article, I will use this project as a starting point and make it dynamic. Part two will cover accessing data, passing data to JavaScript, and working with styles via CSS.

Introduction: This article is a continuation of an article about Web Widgets posted previously here:

I cannot access the widget from another domain. Any ideas?

Which version of IIS are you using? I don't see the Add Managed Handler dialog box in the Microsoft Management Console version 3.0. Thanks, pcrabtree@raritantechnologies.com

Crabtree, I am using IIS 7.0. The "Add Managed Handler" feature was not included in IIS 6.0. Try the following steps to find the feature.

1) Using the MMC snap-in for IIS 7.0, click on the site or application you want to manage. This will display a list of features that you can manage, grouped by area (ASP.NET, IIS, Management).

2) Look in the IIS area for a feature called "Handler Mappings" and double click on it. This will take you to the "Handler Mappings" management feature. The main window displays a list of existing mappings, and the right pane displays a list of Actions you can take.

3) In the right pane, look for the "Add Managed Handler" link and click on it. This will open the feature's dialog box, where you can follow the rest of the steps listed in my post above. You can also access this feature by right-clicking anywhere in the list of existing mappings.

Hope that helps!

Newbie, Can you provide any more information about the problem? Are you able to see any errors that might help me troubleshoot the issue? Also, you might try tools like Fiddler, HttpWatch, or Firebug to see if there are networking or script errors with your widget. These types of debugging tools are absolutely invaluable when working with widgets!

Great article!
One question though: Is there any way to deploy/implement a .NET-based widget with IIS 6.0 (which doesn't have the managed handlers option of the IIS 7.0 Mgt Console)?

Nissan, I haven't hooked this project up on an IIS 6.0 server, but you should be able to do the following to add an HttpHandler in IIS 6. The config settings for the handlers in IIS 6.0 are stored in a different web.config section, see below...

<system.web>
  <httpHandlers>
    <add verb="GET,HEAD" path="eventswidget.jss" type="Demo1.Handlers.WidgetBase, Demo1" validate="false" />
  </httpHandlers>
</system.web>

Please comment back if this doesn't work for you!

Hi Paul, Thanks for sharing this work. Could you help me, please? I tried to use it in .net 2 using an external Json.net library. My problem is not this, but after full compiling with no errors, I get an error during use like "abstract class not well formed" in WidgetBase in Demo1, and the same problem in Demo2, in C# 2005. I have checked many times and verified the definitions of the abstract classes, but nothing. Any suggestions? Regards

Hey Paul, I implemented the recommendation you posted, and while it worked fine on my VS 2008 development environment, once I checked in my changes and it was built to the Windows 2003 Server testing box, it resulted in a "The page cannot be found" error when trying to reference the eventswidget.jss file. Now, to be honest, my dev box is Vista, not XP, but I am debugging in VS 2008 Pro, so it should have worked once deployed to the 2003 server running IIS 6.0, since it executed the web.config changes successfully to render the page on the VS 2008 internal web server. Any ideas on next steps to troubleshooting this error?

Hi Nissan, If you deploy to IIS 6.0, the web.config file will store the httpHandlers in a different section. This would cause you to get the 404 error. Please try the following.

I was just wondering if I could create some web widgets (using VS 2005/2008, C#). The code is superb, and I could create and add my own widget to my local sample application. However, I have a few questions...

1. Do I need scripts to make the widgets work?
2. Is there any mechanism provided by ASP.NET to build widgets?
3. Is there any tool which could emit JavaScript for me?
4. Can I use AJAX to create widgets?

I'd be very grateful if you could throw some light on the questions I have. Regards, Amit Gautam

I am unable to download the code from the given link... I get a server error. Is there any other way I can get the code? Thanks

Ramesh, Please give that link another try. I think you may have hit a temporary error on the Code Gallery site. I don't have the code hosted anywhere else, so hopefully that link will work for you the second time. If not, please let me know and I will see what I can do.

Amit, I'll attempt to answer your questions inline below...

1. In the scenario I've demonstrated, yes. There are many ways to make widgets, though, and you could conceivably find a way to just inject HTML into the consuming page without using JavaScript. Try searching for other widgets on the web and you will find other technologies used to achieve the same purpose.

2. With the ASP.Net MVC framework and jQuery you could create widgets. I am looking into writing a part 3 of this article using ASP.Net MVC and jQuery.

3. Look into Script#. Also, if you are shy about JavaScript, the jQuery library can provide a much easier way for you to utilize the language with the huge learning curve.

4. Yes, and you could use Ajax to load data into widgets as well.
I was attempting to create a one-request widget with this article, so I avoided making additional Http calls via Ajax. If you use the ASP.Net Ajax library, you need to ensure the consuming site also uses this technology.
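To make the moving parts easier to see in one place, here is a minimal sketch of how WidgetBase can be wired up as an HTTP handler serving JavaScript. The BuildOutput and ReadFromFile members mirror the article above, but the ProcessRequest plumbing and the content type shown here are my assumptions, not the article's exact code:

// Sketch only: the handler plumbing is assumed, not quoted from the article.
using System.Web;

namespace Demo1.Handlers
{
    public class WidgetBase : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            // Serve the widget as JavaScript so a <script src="eventswidget.jss">
            // tag can consume it from the test harness page.
            context.Response.ContentType = "text/javascript";
            context.Response.Write(BuildOutput());
        }

        protected virtual string BuildOutput()
        {
            // Derived widgets (e.g. EventsWidget) override this to return script,
            // typically via ReadFromFile as shown in step 3 above.
            return string.Empty;
        }
    }
}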
http://blogs.msdn.com/pajohn/archive/2008/06/18/web-widgets-with-net-part-one.aspx
Example: setting FtpWebRequest.Method = WebRequestMethods.Ftp.ListDirectoryDetails will send the LIST FTP command on the wire and return a detailed list of the contents of the specified folder. The result of executing the command on your FTP server will be in the response stream returned by FtpWebResponse, except for the PWD, SIZE and MDTM methods. For the results of those commands, check the corresponding FtpWebResponse properties as specified here:

First, let me state that I am not recommending that you only use a certain type of authentication in your applications. It is not secure to do so. However, in some cases it is convenient to be able to do so - mainly for verification or for debugging purposes.

Example 1: You do not own the server and you are not sure what type of authentication it supports. By using the code snippet below you can verify whether or not it supports Kerberos, whether or not it supports NTLM, whether or not it supports Basic, etc. Of course you can verify this using Netmon or Ethereal traces and looking at the WWW-Authenticate header value the server is sending, but I find this way easier.

Example 2: You have a custom server which only supports a certain authentication type that you want to test.

Example 3: You have an intermittent authentication error which you suspect only happens when a certain type of authentication is used (for example Kerberos), but you're not sure.

Now to the point: forcing HttpWebRequest to use a certain type of authentication like NTLM, Kerberos, Negotiate, Digest, or Basic is very easy - you can achieve it by using the AuthenticationManager.Unregister() method to unregister all the other authentication modules supported by HttpWebRequest. For example, if you want to force it to use Kerberos only, you unregister the Basic, Digest, NTLM and Negotiate modules. If you want to force it to use NTLM only, you unregister Basic, Digest, Negotiate and Kerberos, and so on.

using System;
using System.Net;
using System.IO;
using System.Text;
using System.Collections;

namespace Mssc.Services.Authentication
{
    class TestAuthentication
    {
        private static string username, password, domain, uri;

        // This method is invoked when the user does not enter the required input parameters.
        private static void showusage()
        {
            Console.WriteLine("Attempts to authenticate to a URL");
            Console.WriteLine("\r\nUse one of the following:");
            Console.WriteLine("\tURL username password domain");
            Console.WriteLine("\tURL username password");
        }

        // Display the registered authentication modules.
        private static void displayRegisteredModules()
        {
            IEnumerator registeredModules = AuthenticationManager.RegisteredModules;
            while (registeredModules.MoveNext())
            {
                IAuthenticationModule currentAuthenticationModule = (IAuthenticationModule)registeredModules.Current;
                Console.WriteLine("\t" + currentAuthenticationModule.AuthenticationType);
            }
        }

        // The getPage method accesses the selected page and displays its content
        // on the console.
        private static void getPage(String url)
        {
            try
            {
                // Create the Web request object.
                HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
                req.Credentials = new NetworkCredential(username, password, domain);
                req.Proxy = null;

                // Issue the request.
                HttpWebResponse result = (HttpWebResponse)req.GetResponse();
                Console.WriteLine("\nAuthentication Succeeded:");

                // Store the response.
                Stream sData = result.GetResponseStream();

                // Display the response.
                displayPageContent(sData);
            }
            catch (WebException e)
            {
                // Display any errors. In particular, display any protocol-related error.
                if (e.Status == WebExceptionStatus.ProtocolError)
                {
                    HttpWebResponse hresp = (HttpWebResponse)e.Response;
                    Console.WriteLine("\nAuthentication Failed, " + hresp.StatusCode);
                    Console.WriteLine("Status Code: " + (int)hresp.StatusCode);
                    Console.WriteLine("Status Description: " + hresp.StatusDescription);
                    return;
                }
                Console.WriteLine("Caught Exception: " + e.Message);
                Console.WriteLine("Stack: " + e.StackTrace);
            }
        }

        // The displayPageContent method displays the content of the selected page.
        private static void displayPageContent(Stream ReceiveStream)
        {
            // Create an ASCII encoding object.
            Encoding ASCII = Encoding.ASCII;

            // Define the byte array to temporarily hold the current read bytes.
            Byte[] read = new Byte[512];
            Console.WriteLine("\r\nPage Content...\r\n");

            // Read the page content and display it on the console, 512 bytes at a time.
            int bytes = ReceiveStream.Read(read, 0, 512);
            while (bytes > 0)
            {
                Console.Write(ASCII.GetString(read, 0, bytes));
                bytes = ReceiveStream.Read(read, 0, 512);
            }
            Console.WriteLine("");
        }

        // Initialize the URI and the credentials.
        private static void Init(string[] uriAndCreds)
        {
            if (uriAndCreds.Length < 3)
            {
                showusage();
                Environment.Exit(1);
            }

            // Set the URI and the user credentials.
            uri = uriAndCreds[0];
            username = uriAndCreds[1];
            password = uriAndCreds[2];
            if (uriAndCreds.Length == 3)
            {
                domain = string.Empty;
            }
            else
            {
                // If the domain exists, store it. By default the domain name is
                // the name of the server hosting the Internet resource.
                domain = uriAndCreds[3];
            }
        }

        public static void Main(string[] args)
        {
            Init(args);

            Console.WriteLine("Listing all authentication modules before unregistering");
            displayRegisteredModules();

            // Unregister the standard Basic, NTLM, Negotiate and Digest modules, leaving only Kerberos.
            AuthenticationManager.Unregister("Basic");
            AuthenticationManager.Unregister("NTLM");
            AuthenticationManager.Unregister("Negotiate");
            AuthenticationManager.Unregister("Digest");
            //AuthenticationManager.Unregister("Kerberos");

            // Display what authentication modules are left registered.
            displayRegisteredModules();

            // Read the specified page and display it on the console.
            getPage(uri);
        } //end Main()
    } //end class TestAuthentication
}

As you may have already noticed, the FtpWebResponse response stream does not contain the result of the PWD, SIZE and MDTM methods. You can get those from the FtpWebResponse properties.

For a detailed blog article on how to use System.Net tracing, go here. Note that this feature is available in the .NET Framework 2.0 and above. In this concrete example I'll be using HttpWebRequest, but you can use any other System.Net API that supports SSL. As an example I shall use the following post: The customer is attempting to access an https site but is getting a socket error: An existing connection was forcibly closed by the remote host. We enable tracing and the last thing we see in the log is:

System.Net Information: 0 : [3284] InitializeSecurityContext(credential = System.Net.SafeFreeCredential_SECURITY, context = 197368:1ff9d60, targetName = 212.77.100.18, inFlags = ReplayDetect, SequenceDetect, Confidentiality, AllocateMemory, InitManualCredValidation)
System.Net Information: 0 : [3284] InitializeSecurityContext(In-Buffer length=9, Out-Buffer length=182, returned code=ContinueNeeded).
System.Net.Sockets Verbose: 0 : [3284] Socket#49212206::Send()
System.Net.Sockets Verbose: 0 : [3284] Data from Socket#49212206::Send
[ ……Here we receive data: omitting for clarity…..]
System.Net.Sockets Error: 0 : [3284] Exception in the Socket#49212206::Receive - An existing connection was forcibly closed by the remote host
System.Net.Sockets Verbose: 0 : [3284] Exiting Socket#49212206::Receive() -> 0#0
System.Net.Sockets Verbose: 0 : [3284] Socket#49212206::Dispose()
System.Net Error: 0 : [3284] Exception in the HttpWebRequest#33574638:: - The underlying connection was closed: An unexpected error occurred on a send.
System.Net Error: 0 : [3284] Exception in the HttpWebRequest#33574638::EndGetResponse - The underlying connection was closed: An unexpected error occurred on a send.

So how do we know what happened? In most cases we will see certificate errors and it will be easy to determine the cause, but in this case we do not see any. Why? The answer is that we were not able to successfully establish a secure connection with the server - the SSL negotiation didn't succeed and the server closed the connection. This is the reason why the CertificateValidationCallback was not called at all - the server closed the connection before sending the certificates. In this case the problem was indeed the server: we try to use TLS first, and if that doesn't succeed we try to use SSL3, but the server immediately dropped the connection. So we explicitly set the protocol to SSL3:

ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;

which resolved the problem. Now you can see the certificates in the trace:

System.Net Information: 0 : [0780] Remote certificate: [Version] V1 [Subject] E=mzielinski@wp-sa.pl, CN=w.wp.pl, OU=Pion Technologii Informatycznej, O=Wirualna Polska S.A., L=Gdansk, S=Pomorskie, C=PL Simple Name: w.wp.pl DNS Name: w.wp.pl. [Issuer] E=mzielinski@wp-sa.pl, CN=Wirtualna Polska Private Certification Centre Class 2, OU=Pion technologii Informatycznej, O=Wirtualna Polska S.A., L=Gdansk, S=Pomorskie, C=PL Simple Name: Wirtualna Polska Private Certification Centre Class 2 DNS Name: Wirtualna Polska Private Certification Centre Class 2 [Signature Algorithm] md5RSA(1.2.840.113549.1.1.4) [Public Key] Algorithm: RSA Length: 1024 Key Blob: 30 81 89 02 81 81 00 bf ff ab 80 08 bb 39 e1 c0 97 64 75 1e ac ee 5e b8 84 8c eb e9 26 25 a5 77 6d 66 fa d3 dd 71 41 b5 87 8a 1f d4 08 8c ba 40 c.... ………….

You can also see certificate errors clearly logged:

System.Net Information: 0 : [0780] SecureChannel#34576242 - Remote certificate has errors:
System.Net Information: 0 : [0780] SecureChannel#34576242 - Certificate name mismatch.
System.Net Information: 0 : [0780] SecureChannel#34576242 - A certificate chain could not be built to a trusted root authority.
System.Net Information: 0 : [0780] SecureChannel#34576242 - A required certificate is not within its validity period when verifying against the current system clock or the timestamp in the signed file.

Now the negotiation won't succeed because of certificate errors, which you can see clearly described in the log.

Variant 1: When using the TcpListener class for our server, there are two ways to get the underlying client endpoint:

TcpClient client = listener.AcceptTcpClient();
IPEndPoint remoteEP = (IPEndPoint)client.Client.RemoteEndPoint;

or

Socket client = listener.AcceptSocket();
IPEndPoint remoteEP = (IPEndPoint)client.RemoteEndPoint;

Variant 2: When using the Socket class:

Socket client = socketServer.Accept();
IPEndPoint remoteEP = (IPEndPoint)client.RemoteEndPoint;

Then we can very easily get the IP address/port for the client:

IPAddress ip = remoteEP.Address;
int port = remoteEP.Port;

The sample below uses SmtpClient to send e-mail from your Gmail account using your Gmail username and password.
using System;
using System.Net;
using System.Net.Mail;

namespace GMailSample
{
    class SimpleSmtpSend
    {
        static void Main(string[] args)
        {
            SmtpClient client = new SmtpClient("smtp.gmail.com", 587);
            client.EnableSsl = true;
            MailAddress from = new MailAddress("YourGmailUserName@gmail.com", "[Your full name here]");
            MailAddress to = new MailAddress("your recipient e-mail address", "Your recipient name");
            MailMessage message = new MailMessage(from, to);
            message.Body = "This is a test e-mail message sent using gmail as a relay server.";
            message.Subject = "Gmail test email with SSL and Credentials";
            NetworkCredential myCreds = new NetworkCredential("YourGmailUserName@gmail.com", "YourPassword", "");
            client.Credentials = myCreds;
            try
            {
                client.Send(message);
            }
            catch (Exception ex)
            {
                Console.WriteLine("Exception is:" + ex.ToString());
            }
            Console.WriteLine("Goodbye.");
        }
    }
}
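Tying back to the FTP note at the top of this post, here is a minimal sketch of issuing LIST and then reading a file size from the response property rather than the stream. The server and file names are placeholders:

using System;
using System.IO;
using System.Net;

class FtpListDemo
{
    static void Main()
    {
        // LIST: the detailed directory listing comes back in the response stream.
        FtpWebRequest listReq = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/somefolder/");
        listReq.Method = WebRequestMethods.Ftp.ListDirectoryDetails;
        using (FtpWebResponse listResp = (FtpWebResponse)listReq.GetResponse())
        using (StreamReader reader = new StreamReader(listResp.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }

        // SIZE: the result is surfaced on an FtpWebResponse property, not the stream.
        FtpWebRequest sizeReq = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/somefolder/somefile.txt");
        sizeReq.Method = WebRequestMethods.Ftp.GetFileSize;
        using (FtpWebResponse sizeResp = (FtpWebResponse)sizeReq.GetResponse())
        {
            Console.WriteLine("Size: {0} bytes", sizeResp.ContentLength);
        }
    }
}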
http://blogs.msdn.com/mariya/default.aspx
I posted about Giving a character a new identity (by giving it some secondary weight). Now that post, while true, only tells part of the story. Now I am going to tell the other part....

Take the following code (a sketch of it appears at the end of this post) and you may be able to see where I am going before you even look at the results. The results? They will be:

-1-1-10-1-1-10
-1-1-10-1-1-10

So what's the problem? Why does System.String.IndexOf(Char) behave differently than System.String.IndexOf(String), System.Globalization.CompareInfo.IndexOf(String, Char), and System.Globalization.CompareInfo.IndexOf(String, String), anyway?

Well, setting aside my disdain for all of the System.String shortcuts to globalization functionality that make the real linguistic features of the System.Globalization namespace that much harder for developers both inside and outside of Microsoft to find (never mind the additional confusion about the confusing and incomplete flags they add), there is the fact that the System.String "shortcut" methods often contain actual shortcuts to try to be more performant, to try to keep from calling the "slower" globalization methods.

So this particular issue can be looked at as an over-optimization, a case where developers assumed that they would not need to call the "slower" method in this situation. Were they wrong? Well, in my view, yes. All of these shortcut methods are just plain bad if they ever do anything other than call the real methods in the System.Globalization namespace. Anything else makes for less maintainable code that requires modifying multiple bits if there are ever changes or problems to fix, and it is harder for testers to track all of these different places to verify correct behavior in. Of course now I suppose it would be in some people's minds a breaking change to fix the errant method.

So let's make it more interesting and raise the stakes. The results here? You know, in this "Swedish A-Ring" case?

-1-1-1-1-1-1-10
-1-1-1-1-1-1-10

So, that over-optimization is causing behavior differences in strings that are canonically equivalent in Unicode, to wit LATIN SMALL LETTER A WITH RING ABOVE versus LATIN SMALL LETTER A + COMBINING RING ABOVE. And that is a bug, suggesting that just taking out this over-optimization case might be in everyone's best interests.... (Using the Swedish or Japanese results above is not required; it just makes the weirdness look worse. The bug is there either way.)

This post brought to you by å (U+00e5, a.k.a. LATIN SMALL LETTER A WITH RING ABOVE)

Actually, my opinion goes the other way: the System.String "shortcuts" should never have called the System.Globalization methods. It's not really a matter of "optimization" vs. "user expectations", because there really ARE some cases where you want the code-point behaviour. Doing it that way would make System.String a simple "array of code-points" and the methods on it would work that way. The System.Globalization methods are the actual linguistics methods. Ah well, too late for all this anyway :-)

Hello Dean, If people want the code point behavior, then the ORDINAL methodology is available. These shortcuts are actually designed to work linguistically, they just don't always do so....

<<make System.String a simple "array of code-points">> Two opinions on this kind of thing:

1. All functions handling one character should be removed. Completely. From .NET, Win32 API, C++.
Because doing anything on one character is a problem: search, compare, changing case, you name it. Everything should be done on strings, return strings, and so on. It is the only way to get correct linguistic results.

2. The storage should be separated from the string itself. If you need access to the code points, you access the storage explicitly. Then you will be able to do stuff like this:

string str = "\u0061\u030a";
str.length();          // gives you linguistic info
str.storage.length();  // gives you storage info (code points)

The storage is locale-independent, the string is not. And the intention is always clear. OK, some more thinking on what can go wrong is needed, but these are the general ideas.

Well, actually, we use a hybrid approach: 1) For most purposes we use your #1. 2) For NLS collation functions that take an LCID, we do #2 plus (we include other constructs like sort elements).

Mihai: I don't think I disagree, specifically. My suggestion would just have been that the System.String class be your "string.storage" and SOMETHING ELSE be the linguistic stuff. I guess that's just a product of where I usually work, though. Most of my string manipulation stuff (in my day job) comes from manipulating email and SMS messages, both of which are predominately ASCII-based (at least, at the level I work on - the raw protocols). If most of my work was on web pages, or a text editor or something, then I suppose I'd go with your suggestion... You can't please everybody :-)

The only problem with THAT idea (which by the way there are people on the BCL team who would have preferred that approach in retrospect) is that there would be no linguistic support in the vast majority of apps. And I just can't be a complete fan of that sort of approach.... :-)

<<My suggestion would just have been that the System.String class be your "string.storage" and SOMETHING ELSE be the linguistic stuff.>> Technically, it does not matter what you call things. But for the perception of the one reading the code, it does (think #define BOOL int, before bool was standard). A string is something containing text, and text is associated with linguistic properties in one's mind. And since System.String has stuff like ToUpper, it is already "too dirty" to be plain storage (because ToUpper is a locale-sensitive operation). So I really think that String *is* the right thing for linguistic behavior.

<<Well, actually, we use a hybrid approach>> Maybe in the implementation. But the idea was to make this explicit, for all programmers to see, not just an internal representation thing. When I see str.length() and str.storage.length(), the intention becomes instantly clear, without even reading the doc. It is probably too late to do this without breaking backward compatibility. And I was also talking about C++, which is outside MS control :-) And the idea was philosophical anyway. I don't really expect that <<All functions handling one character should be removed. Completely. From .NET, Win32 API, C++.>> Who am I, who's going to listen to me? :-D

So I can't tell whether you think the anomaly I mentioned in this post about System.String.IndexOf(Char) is a bug to be fixed or a backcompat issue to be left alone? :-)

Bug :-) Since String is a linguistic thing, I would expect linguistic behavior. If you ever expose something like System.String.Storage.IndexOf(Char), then that should work on coding units.
What I think would make sense (at least for me :-)

string st2 = "\u0061\u030a";

// Linguistic behavior
ci.IndexOf(st2, "a");      // -1

// Remove as per rule #1
ci.IndexOf(st2, 'a');      // undefined API error

// Linguistic behavior
st2.IndexOf("a");          // -1

// Remove as per rule #1
st2.IndexOf('a');          // undefined API error

// Add this API, with non-linguistic behavior,
// not affected by CultureInfo, working on coding units
st2.Storage.IndexOf("a");  // 0

// DO NOT add this API, because non-linguistic behavior
// in a CultureInfo context is dumb
ci.Storage.IndexOf("a");   // undefined API error

<<System.String.IndexOf(Char) is a bug to be fixed or a backcompat issue to be left alone?>> I think I did not answer the question. It is clear it is a bug, but to fix, or not to fix, that is the question? Sounds like you are trying to push me into a corner :-) Well, if it can be fixed without breaking compatibility, then yes, fix it :-) Check with Raymond Chen :-D

String str = "C:\Documents and Settings\asriv5\Desktop\Login.jsp"; int i = str.lastIndexOf("\"); // not working, why??? Please give me the solution... anupam
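Since the code samples in the post above did not survive, here is a minimal sketch (my reconstruction, not the author's exact code) that reproduces the reported "-1-1-1-1-1-1-10" pattern for the Swedish A-ring case:

using System;
using System.Globalization;

class IndexOfProbe
{
    static void Main()
    {
        CompareInfo ci = new CultureInfo("sv-SE").CompareInfo;
        // Precomposed å, then a + combining ring above: canonically equivalent strings.
        string[] tests = new string[] { "\u00e5", "a\u030a" };
        foreach (string st in tests)
        {
            Console.Write(ci.IndexOf(st, "a"));   // linguistic: -1
            Console.Write(ci.IndexOf(st, 'a'));   // linguistic: -1
            Console.Write(st.IndexOf("a"));       // culture-sensitive: -1
            Console.Write(st.IndexOf('a'));       // ordinal shortcut: -1, then 0 for "a\u030a"
        }
        Console.WriteLine();
    }
}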
http://blogs.msdn.com/michkap/archive/2007/02/17/1701561.aspx
Here is the sample code:

SPWeb web = new SPSite("").OpenWeb();
SPField spField = web.Fields["MyChoice"];
SPFieldChoice choiceFields = (SPFieldChoice)spField;
string[] choices = new string[3] { "x", "y", "z" };
foreach (string choice in choices)
{
    choiceFields.Choices.Add(choice);
}
choiceFields.Update();

Keep Exploring... :)

How do you add a new Site Column to a Content Type using the MOSS object model? Here is the sample code:

SPWeb web = new SPSite("").OpenWeb();
SPContentType myCT = web.ContentTypes["myNewContentType"];
myCT.FieldLinks.Add(new SPFieldLink(web.Fields["abc"]));
myCT.Update();

Note: The following MSDN article speaks about "Updating Child Content Types". The example given in the article uses the SPFieldCollection object (SPContentType.Fields) to add columns to the content type. But when we use the sample given in the article, we end up with the error message "SPException: This functionality is unavailable for field collections not associated with a list.". I found from the following article that columns are not added into a content type but are referenced by it. The article "How to: Reference a Column in a Content Type" says: The Fields property returns an SPFieldCollection object. Each SPField in this collection represents a "merged view" of the base column definition and any overwritten properties specified in the column reference in the content type. Because of this, you cannot add columns directly to this collection. Attempting to do so results in an error.

Keep Coding...

SharePoint's Webs web service can be used to add/delete/update site columns. Unfortunately, MSDN/SDK does not have a sample yet. Here I provide the sample code.

//webs webservice object
localhost.Webs websService = new ContentTypeAndSiteColumn.localhost.Webs();
//url for the webservice
websService.Url = "";
//credential
websService.Credentials = System.Net.CredentialCache.DefaultCredentials;
//xmldocument object
XmlDocument xDoc = new XmlDocument();
//Fields to be added
XmlElement newFields = xDoc.CreateElement("Fields");
//Fields to be edited
XmlElement updateFields = xDoc.CreateElement("Fields");
//Fields to be deleted
XmlElement deleteFields = xDoc.CreateElement("Fields");
newFields.InnerXml = "<Method ID='1'><Field Name='FieldName' DisplayName='FieldDisplayName' Group='Group Name' Type='Text'/></Method>";
// The original post assigned newFields.InnerXml twice; the second assignment
// presumably belongs to updateFields.
updateFields.InnerXml = "<Method ID='2'><Field Name='FieldName' DisplayName='FieldDisplayName' Group='Group Name' Type='Text'/></Method>";
deleteFields.InnerXml = "<Method ID='3'><Field Name='FieldName'/></Method>";
XmlNode returnValue = websService.UpdateColumns(newFields, updateFields, deleteFields);
MessageBox.Show(returnValue.OuterXml);

Related Links
----------------

Keep Coding.... :)

Unfortunately, MCMS 2002 Site Manager doesn't allow you to rename a resource gallery item, so in some situations you may end up with duplicate entries in resource gallery item names. MCMS 2002 allows you to have multiple resource items with the same name; the items are differentiated by their GUIDs. But this becomes a problem when you are migrating to MOSS 2007: you will get the "Leaf names are not unique." error message, and the suggested fix is to rename the resource gallery item. Here is the sample code to rename a resource gallery item.

// Get the MCMS application context.
CmsApplicationContext cmsContext = new CmsApplicationContext();
// Logon to MCMS:
// Assumes that you have Windows Authentication turned on.
WindowsIdentity ident = WindowsIdentity.GetCurrent();
// Make sure to set the context to Update mode (which means you can write
// back to MCMS using PAPI).
cmsContext.AuthenticateUsingUserHandle((System.IntPtr)ident.Token.ToInt32(), PublishingMode.Update);
ResourceGallery rsg = cmsContext.Searches.GetByPath("/Resources") as ResourceGallery;
Resource res = rsg.Resources["Content Development"];
res.Name = "EMP1.gif";
res.DisplayName = "EMP1.gif";
// Commit changes back to MCMS.
cmsContext.CommitAll();

Keep coding :)

Here you go for the sample code:

try
{
    //create imaging web service object
    localhost.Imaging img = new ImagingTest.localhost.Imaging();
    //set url, e.g. http://<site>/_vti_bin/Imaging.asmx (the value was blank in the original)
    img.Url = "";
    //set credential
    img.Credentials = System.Net.CredentialCache.DefaultCredentials;
    //for results
    System.Xml.XmlDocument resdoc = new System.Xml.XmlDocument();
    System.Xml.XmlNode resnode = resdoc.CreateNode(System.Xml.XmlNodeType.Element, "Result", "");
    System.IO.FileStream fs = new System.IO.FileStream("c:\\n15.jpg", System.IO.FileMode.Open, System.IO.FileAccess.Read);
    //Get the content
    byte[] content = new byte[fs.Length];
    //store the content
    fs.Read(content, 0, (int)fs.Length);
    //Upload the file - specify "" for root folder, for sub folders specify 1, 2, 3 relative to the folder number
    resnode = img.Upload("123", "", content, "n15.jpg", true);
    MessageBox.Show(resnode.OuterXml);
}
catch (Exception ex)
{
    MessageBox.Show(ex.ToString());
}

Problem Background: When you enable "User must change password at next logon", the corresponding user will face authentication issues. To resolve this, you may enable IISADMPWD for WSS.

How to enable IISADMPWD for WSS?

Once enabled, you will be able to get the IIS password management page when "User must change password at next logon" is selected.

[Keep Using SharePoint]

Note: Modifying the default files is not supported by Microsoft. Microsoft recommends using a custom Site Definition.

After configuring the details, if you create a new document library based on the list definition, then the document library event handler will be automatically configured.

If you install a wppack by using stsadm with a forward slash "/" in the path, then you will not be able to uninstall it by using the stsadm tool :( The story does not end there: you will not be able to install the latest version of your web part :( However, the SharePoint object model helps us resolve the problem. For example, running the following command will end up with the above problem:

stsadm -o addwppack -filename "Helloworld/TestDeploy.CAB"

Run the following code to remove the wppack that has "/" in the path, and then install your web part by using stsadm with "\" in the path:

SPGlobalAdmin oGA = new SPGlobalAdmin();
oGA.RemoveWPPack("helloworld/testdeploy.cab", 0, null, null);

To enable all rights at the virtual server level:

vServer.Config.Properties["virtualserverpermsmask"] = "-1";

To disable the "Manage Site Groups" permission at the virtual server level:

vServer.Config.Properties["virtualserverpermsmask"] = "2113867535";

Keep Coding ;)

There are two methods, SaveChanges and SaveProperties, provided by WSS which sometimes mislead users. Here are details on which method to use and when.

SPWebPartCollection.SaveChanges() - This method is used to save changes made to a web part from outside the web part code.
For example, from a console application, Windows application, etc.

SPWebPartCollection.SaveProperties - This method is used to save the properties from within the web part code.

Problem: You are localizing SharePoint web parts. You localize display text and custom property names/descriptions using the ResourceManager and satellite assemblies. However, the web part "dwp" files in the cab file contain a title and description for each web part. You want to localize these values as per the MSDN article "Packaging and Deploying Web Parts for Microsoft Windows SharePoint Services", but when you change the locale in SharePoint, it does not use the localized dwp files. It only uses the localized display text and property names/descriptions.

Details:
- There are language templates used to create sites/webs. For example, the 1033 folder is for English, 1031 is for German, and so on. A site must be created by using one of these language templates. SharePoint uses the language template to render the site. The site navigation, site data and Add Web Parts dialog all use the language template. Once we have created the site by using a language template, we cannot change these settings.
- Localized web parts can be created according to the language template we used.
- The wpcatalog folder is the place to install the *.dwp files. The files found in this directory will be used to display the details in the "Add Web Parts" window.
- Any sub folder found in the wpcatalog folder with a language name (en-US, de-DE, etc.) will be used to display the web part list in the "Add Web Parts" window according to the language template used to create the site. For example, if we create a site by using the German language template (1031), then the files under the de-DE folder will be displayed in the "Add Web Parts" window, and the files under en-US and other languages will not be displayed. The files found in the wpcatalog root folder will be displayed for all sites.
- If you want to localize the *.dwp files, then in this particular scenario you have to create two sites, one by using the 1033 template and another by using the 1031 template. Obviously you will end up having two different sites, one for English and another for German.
- The "regional settings" option under the site settings is used for setting the date and currency formats. Changing the regional settings won't affect site settings such as "Site Navigation", "Site data" and the "Add Web Parts dialog box".
- The .NET code in web parts and web controls uses the "regional settings" found on the site and renders accordingly.

Consider the following scenario. You installed WSS in Account Creation Mode. If a user forgets their password, they currently have to contact the admin, who in turn needs to go into site administration and manually reset the password. This issue has become more prevalent and will distract the admin from normal duties. So I would like to have a way to automate password resets for the users. Users will be provided a web part or other link that will reset the password and send out an email with the new password to the user's account email address. Here is some sample code.
using System;
using System.Runtime.InteropServices;
using System.Security.Principal;
using System.Web.UI;
using System.Web.UI.WebControls;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Utilities;
using Microsoft.SharePoint.WebControls;

public class MyChangePassword : Microsoft.SharePoint.WebPartPages.WebPart
{
    // Button for Change Password
    Button btnChangePassword;
    string strError = "";

    protected override void CreateChildControls()
    {
        // Create the button
        btnChangePassword = new Button();
        btnChangePassword.Width = 100;
        btnChangePassword.Text = "Change Password";
        btnChangePassword.Click += new EventHandler(btnChangePassword_Click);
        this.Controls.Add(btnChangePassword);
    }

    private void btnChangePassword_Click(object sender, EventArgs e)
    {
        try
        {
            // Get the context of the web
            SPWeb webContext = SPControl.GetContextWeb(Context);
            string strLoginName = webContext.CurrentUser.LoginName;
            // Get the impersonation context
            WindowsImpersonationContext wic = CreateIdentity("username", "domain", "password").Impersonate();
            SPUtility.ChangeAccountPassword(webContext, strLoginName, "", "newpassword");
            // Undo the impersonation context
            wic.Undo();
            strError = "Password Changed";
        }
        catch (Exception ee)
        {
            strError += ee.ToString();
        }
    }

    protected override void RenderWebPart(HtmlTextWriter output)
    {
        btnChangePassword.RenderControl(output);
        output.Write(strError);
    }

    // The P/Invoke declarations were garbled in the original post; the usual
    // LogonUser-based helper looks like the following sketch.
    [DllImport("advapi32.dll", SetLastError = true)]
    private static extern bool LogonUser(string lpszUsername, string lpszDomain, string lpszPassword, int dwLogonType, int dwLogonProvider, ref IntPtr phToken);

    [DllImport("kernel32.dll", SetLastError = true)]
    private extern static bool CloseHandle(IntPtr handle);

    private static WindowsIdentity CreateIdentity(string userName, string domain, string password)
    {
        IntPtr token = IntPtr.Zero;
        // 2 = LOGON32_LOGON_INTERACTIVE, 0 = LOGON32_PROVIDER_DEFAULT
        if (!LogonUser(userName, domain, password, 2, 0, ref token))
            throw new Exception("LogonUser failed: " + Marshal.GetLastWin32Error());
        WindowsIdentity identity = new WindowsIdentity(token);
        CloseHandle(token);
        return identity;
    }
}

The following code will fail if the user does not have Admin privileges:

SPRoleCollection RC = CurrentWeb.CurrentUser.Roles;

The CurrentWeb.CurrentUser.Roles method requires the Manage Site Groups permission, and impersonation does not work with SPRoleCollection.

RESOLUTION: Write a custom SharePoint web service and pass the administrator credentials (NetworkCredential) to the web service to retrieve the roles.
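For the resolution just mentioned, such a custom web service might look something like the sketch below. The method name and role-reading logic are illustrative assumptions, not code from the original post; the client would supply administrator credentials via the proxy's Credentials property:

using System.Collections;
using System.Web.Services;
using Microsoft.SharePoint;

public class RoleService : WebService
{
    // Runs under the credentials the caller supplies, so the client should set
    // proxy.Credentials = new System.Net.NetworkCredential("admin", "password", "domain");
    [WebMethod]
    public string[] GetUserRoles(string siteUrl)
    {
        SPSite site = new SPSite(siteUrl);
        SPWeb web = site.OpenWeb();
        ArrayList names = new ArrayList();
        foreach (SPRole role in web.CurrentUser.Roles)
        {
            names.Add(role.Name);
        }
        site.Close();
        return (string[])names.ToArray(typeof(string));
    }
}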
http://blogs.msdn.com/karthick/
One of the often-requested features of NUnit was the ability to test private member variables and private methods. I resisted because I always felt that if you limited yourself to the public interface, that enabled you to freely change the implementation without having to change the tests. This I believe is a good thing, because if the tests require a lot of maintenance you might be tempted to not do it. However, the flip side of the equation is that you may end up exposing a method, or worse yet a member variable, only for testability. The downside of this is that once it is present in the interface it could be used for something else. So, as usual, there are arguments on either side.

.NET, through reflection, provides the ability to invoke private methods and see the values of private member variables, so it is not impossible to do this even today. However, the unit testing tool in VS Team System has a built-in wrapper, called PrivateObject, to make the syntax a bit easier to digest. Here is an example:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PrivateTest
{
    private PrivateObject privateObject;

    [TestInitialize]
    public void Initialize()
    {
        TestedClass testedClass = new TestedClass();
        privateObject = new PrivateObject(testedClass);
    }

    [TestMethod]
    public void PrivateField()
    {
        bool field = (bool)privateObject.GetField("privateField");
        Assert.IsTrue(field);
    }

    [TestMethod]
    public void PrivateMethod()
    {
        bool result = (bool)privateObject.Invoke("PrivateMethod");
        Assert.IsTrue(result);
    }
}

As with all capabilities there is a right time and a wrong time to use them. I will be spending more time on this over the next few weeks, but I would really like to get your feedback about this and recommendations on practice.

This posting is provided "AS IS" with no warranties, and confers no rights.
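For comparison, the raw reflection route that the post alludes to might look like this sketch against the same hypothetical TestedClass:

using System;
using System.Reflection;

class ReflectionProbe
{
    static void Main()
    {
        TestedClass testedClass = new TestedClass();
        BindingFlags flags = BindingFlags.Instance | BindingFlags.NonPublic;

        // Read the private field directly.
        FieldInfo field = typeof(TestedClass).GetField("privateField", flags);
        bool fieldValue = (bool)field.GetValue(testedClass);

        // Invoke the private method directly.
        MethodInfo method = typeof(TestedClass).GetMethod("PrivateMethod", flags);
        bool result = (bool)method.Invoke(testedClass, null);

        Console.WriteLine("{0} {1}", fieldValue, result);
    }
}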
http://blogs.msdn.com/jamesnewkirk/archive/2004/06/07/150361.aspx
Last week an ISV asked me how to hide/remove the "New" or "Actions" menu of the Document Library toolbar in SharePoint 2007. This may be a common need, so here is the way to do it.

You need to override the .ascx-based toolbar templates by using a custom template, and thereby customize the toolbar. Save the following code into nonewbuttonfordoclibtemplates.ascx, drop it into C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\CONTROLTEMPLATES, and run IISRESET; you should then be missing the New button on your document library toolbars.

<%-- Reconstructed following the header of the stock DefaultTemplates.ascx;
     verify the RenderingTemplate ID and toolbar attributes against that file. --%>
<%@ Control Language="C#" AutoEventWireup="false" %>
<%@ Register TagPrefix="SharePoint" Assembly="Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" Namespace="Microsoft.SharePoint.WebControls" %>
<%@ Register TagPrefix="SPHttpUtility" Assembly="Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" Namespace="Microsoft.SharePoint.Utilities" %>
<%@ Register TagPrefix="wssuc" TagName="ToolBar" src="~/_controltemplates/ToolBar.ascx" %>
<%@ Register TagPrefix="wssuc" TagName="ToolBarButton" src="~/_controltemplates/ToolBarButton.ascx" %>
<SharePoint:RenderingTemplate ID="DocumentLibraryViewToolBar" runat="server">
  <Template>
    <wssuc:ToolBar CssClass="ms-menutoolbar" EnableViewState="false" id="toolBarTbl" runat="server">
      <Template_Buttons>
        <SharePoint:ActionsMenu AccessKey="<%$Resources:wss,tb_Actions%>" id="ActionsMenu" runat="server" />
      </Template_Buttons>
    </wssuc:ToolBar>
  </Template>
</SharePoint:RenderingTemplate>

Hi, Great post and very insightful. It's amazing how your single post has turned out to be more helpful than any article I have read on MSDN about customizing styles. Maybe I have been looking in the wrong place (but I have been looking for hours). My first question is: I am trying to customize the style for the toolbar, but only for a specific site. Where is the style called "ms-menutoolbar", and how can I edit the style so it only affects the toolbar in my top-level site collection AND the subsites? I do NOT want the style change to affect other site collections. My next question is more related to your post but also related to my last question: How can I change the toolbar so it applies only to the site collection and all subsites, but not to all site collections? Thanks!

Thank you very very much, this was helpful.

Hi! There is a simpler way to hide these items. Open your page in SharePoint Designer. Right-click on the web part and select "Convert to XSLT Data View". Next, you can click on the toolbar's items and remove them by pressing DEL. BUT! It will not be a web part any more! Regards, Cinek

How can I add new buttons on the toolbar?

Can I change the name of items in the toolbar (e.g. instead of "Actions" --> "you can", or instead of "New" --> "my text")?

Thanks, really helpful. Do you have any idea how we can add a new button on the toolbar? If yes, where will the button action go?

Thanks for this tip! Unfortunately this works for the complete site. Do you know if it's possible to combine this with a feature, so I can apply it only to the document libraries in one single subsite? With kind regards, René van der Enden

Here is code for a Picture Library. I needed to remove the Actions menu but keep the Settings and Upload:

<%@ Control Language="C#" AutoEventWireup="false" %>
<SharePoint:RenderingTemplate ... runat="server">
  <Template>
    <wssuc:ToolBar ... runat="server">
      <Template_Buttons>
        <SharePoint:UploadMenu AccessKey="<%$Resources:wss,tb_Upload%>" runat="server" />
      </Template_Buttons>
    </wssuc:ToolBar>
  </Template>
</SharePoint:RenderingTemplate>

I need to hide the Manage Permissions and Manage Copies buttons from the toolbar of DispForm.aspx in a picture library. This should apply only to a particular subsite; other picture libraries should not be affected. Please suggest a way to do so.

Has anyone figured out how to do this only for particular sites or subsites? Doing this across the board on the server isn't really an option. Thanks!
Hi, I have read this article and want to ask some questions. As mentioned in previous comments, a button of the toolbar can be hidden, but after hiding the button it will no longer be a web part. I want to know how I can hide an item of the "Actions" menu. For example, I want to hide the "Open with Windows Explorer" item of the "Actions" menu. How can I do it? I tried to find the toolbar.ascx template in the path "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\CONTROLTEMPLATES", but it seems to be useless. I also tried to use SharePoint Designer to view the web part in XSLT format; I only found this "<SharePoint:ActionsMenu", but how can I access and modify the menu item from code?

Here's how to do this only for certain sites: Add code to the new ascx that checks the SPContext object, and adds the appropriate version of wssuc:ToolBar depending on which site you're on, which template it uses (.WebTemplate + .Configuration), or just about anything else you like. Then you can have the standard toolbar for ordinary sites, and the custom one for only a few. As for hiding elements, you can create your own class that inherits from ActionsMenu, NewMenu, etc., and remove the items you don't want. Then refer to that class in the Template_Buttons section. In the class, just override AddMenuItems, call base.AddMenuItems(), and afterwards loop through MenuTemplateControls.Controls, and remove or change as you like. Alternatively you can delete the call to base.AddMenuItems and use AddMenuItem to add the items that you do want. You can also inherit directly from ToolBarMenuButton (not much difference).

Thanks to Bjørn Stærk, useful information. But I would like to ask some basic questions. How can I inherit that ActionsMenu? As I understand it, the SharePoint classes are compiled into a .dll, so we can only inherit from it? As you said, "Then refer to that class in the Template_Buttons section." What is the meaning of "Template_Buttons section" - a file name, or...? I ask because I can't find such a file with the search function.

I have been trying to figure out this: Why do read-only members have access to the option "Open with Windows Explorer"? Through the Explorer these members can edit and delete files! So I don't want to hide it, because it should be available for the site members with full control rights. :\

It is because this is a requirement of my client. He would like to create a group of accounts that has Read/Write/Execute permission on the files. The main purpose of hiding "Open with Windows Explorer" is to ensure users input the attributes for each file, or to force workflows or events to run when files are uploaded through the SharePoint web-based interface. To ensure information consistency, I must hide "Open with Windows Explorer".

I am not sure what you meant, but what I want is this: Read members should NOT see the "Open in Windows Explorer" option. How do you hide it? Thanks again

Hi Bjørn Stærk, Thanks for your help with hiding the elements in the New menu. You made my day. When I am creating a custom control for NewMenu, how can I add a custom action through a feature to my custom New menu? I tried all the possible Location and GroupId values, but the feature did not identify my custom menu. What should be the location for my custom menu instead of Microsoft.SharePoint.StandardMenu? Please let me know if you have any input.
I've tried this code and can't get it to work; there is no change to my document library. What am I not doing? Thanks, Dave

I've tried the sample code above and cannot get it to work. I would like to remove the multiple upload options from the Document Library toolbar. Is there any way to do this?

Very nice post. It helped me a lot. But I need one more thing: do you know any way to hide the dictionary from the toolbar of edit forms?

Hi, thanks. How can I hide the same for a given subsite?

I want to remove "Actions" and "Upload" only if the user is in the "Contributor" role. Can this solution do that? Thank you very much.

Very nice article. By reading this document I was able to hide the New button on "Document Library" and "Picture Library". Can you please let me know how I hide the New button on a document library created from a "Custom document library template"? And another thing: how do I deploy this ascx control using a Feature? Thanks in advance.

I have a similar question as Rupali... how can I do this with other list types, i.e. Announcements or Custom List?

Hi, I want to do this in a ListView web part which is a child control of my web part. Is this possible?

I want to know how I can hide/remove the multiple upload button from the upload menu.

Same problem here; I need to remove/hide the single upload button from the upload menu. No luck on any trials. Can anyone help? Ah, got a solution - not necessarily a good one, but it works for my need.

Would anyone be willing to post the code to hide the Export to Excel option so that I can remove it for the survey function of the site? I am not a programmer, so simple instructions such as the ones listed above would do wonders for me. Thank you so much!!! Shanea

Hiding toolbar menu items in a SharePoint list

Something similar: Removing the Attachment button from the form toolbar on NewForm.aspx and EditForm.aspx

New info: how to remove items from the standard toolbar on the item edit, create, and display forms... If you would like to have some paragraphs translated from Polish to English, please leave a comment.

But how are you pointing the lists to now use nonewbuttonfordoclibtemplates.ascx rather than what it wants to use?

I tested this code, but it doesn't work...

How do I hide the New menu of a Custom List? I need to remove the upload and settings menus in the form library toolbar.

This really is a great post and it's about the closest I've come to resolving my issue. I need to simply hide the edit button on the menu bar. This is for the calendar, and the menu that I'd like to customize is displayed in the dispForm. This view shows the edit function, but again this is where I have a need to remove the 'edit button'. In doing so I'm hoping to remove users' ability to edit items, as this should only be an admin function. tmacon@cio.sc.gov

I have done that using JavaScript... check this article...

Has anyone figured out how to do this only for particular sites/subsites/document libraries? Doing this across the board on the server isn't really an option, as all the document libraries on that particular site/subsite take the effect. Thanks!

SharePoint Customizing SharePoint Context Menus

Hidden SharePoint Lists, Fields, and other Advanced List

It's a nice article which saved me time. I want to hide "view" - I mean the create view and modify view options in the list menu bar. How do you do that?

Can anyone give me a JavaScript which I can use in a Content Editor web part to hide the "Edit in Microsoft Office InfoPath" and "Edit Properties" menu items?
Please don't refer me to some other blog. Thanks :) I already have one function like below:

!= "Edit in Microsoft Office InfoPath" && wzText != "View Properties") if(wzText != "Edit Properties") AChld(p,mo); return true; }

BUT this is working only when I am in the ".....?PageView=Shared" view of the page, NOT on the AllItems.aspx page.

Take a look at this: Download from codeplex/features
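To make the subclassing suggestion from the comments above concrete, a trimmed menu might look like the sketch below. It leans on the AddMenuItems override and MenuTemplateControls collection described by Bjørn Stærk; I have not verified those members or the MenuItemTemplate.Text property against the assembly, so treat them as assumptions:

using System.Web.UI;
using Microsoft.SharePoint.WebControls;

// Sketch of the subclass approach described in the comments above.
public class TrimmedActionsMenu : ActionsMenu
{
    protected override void AddMenuItems()
    {
        base.AddMenuItems();

        // Walk the generated items backwards and drop the ones we don't want.
        ControlCollection items = MenuTemplateControls.Controls;
        for (int i = items.Count - 1; i >= 0; i--)
        {
            MenuItemTemplate item = items[i] as MenuItemTemplate;
            if (item != null && item.Text == "Open with Windows Explorer")
            {
                items.RemoveAt(i);
            }
        }
    }
}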
http://blogs.msdn.com/dipper/archive/2006/10/05/How-to-Remove-or-hiding-items-in-List-toolbar-in-Sharepoint-Server-2007.aspx
A language design question was posted to the Microsoft internal C# discussion group this morning: "Why must overloaded operators be static in C#? In C++ an overloaded operator can be implemented by a static, instance or virtual method. Is there some reason for this constraint in C#?"

Before I get into the specifics, there is a larger point here worth delving into, or at least linking to. Raymond Chen immediately pointed out that the questioner had it backwards. The design of the C# language is not a subtractive process; though I take Bob Cringely's rather backhanded compliments from 2001 in the best possible way, C# is not Java/C++/whatever with the kludgy parts removed. Former C# team member Eric Gunnerson wrote a great article about how the process actually works. Rather, the question we should be asking ourselves when faced with a potential language feature is "does the compelling benefit of the feature justify all the costs?" And costs are considerably more than just the mundane dollar costs of designing, developing, testing, documenting and maintaining a feature. There are more subtle costs, like, will this feature make it more difficult to change the type inferencing algorithm in the future? Does this lead us into a world where we will be unable to make changes without introducing backwards compatibility breaks? And so on.

In this specific case, the compelling benefit is small. If you want to have a virtually dispatched overloaded operator in C# you can build one out of static parts very easily. For example:

public class B
{
    public static B operator +(B b1, B b2)
    {
        return b1.Add(b2);
    }

    protected virtual B Add(B b2)
    {
        // ...
    }
}

And there you have it.

So, the benefits are small. But the costs are large. C++-style instance operators are weird. For example, they break symmetry. If you define an operator+ that takes a C and an int, then c+2 is legal but 2+c is not, and that badly breaks our intuition about how the addition operator should behave. Similarly, with virtual operators in C++, the left-hand argument is the thing which parameterizes the virtual dispatch. So again, we get this weird asymmetry between the right and left sides. Really what you want for most binary operators is double dispatch -- you want the operator to be virtualized on the types of both arguments, not just the left-hand one. But neither C# nor C++ supports double dispatch natively. (Many real-world problems would be solved if we had double dispatch; for one thing, the visitor pattern becomes trivial. My colleague Wes is fond of pointing out that most design patterns are in fact necessary only insofar as the language has failed to provide a needed feature natively.)

And finally, in C++ you can only define an overloaded operator on a non-pointer type. This means that when you see c+2 and rewrite it as c.operator+(2), you are guaranteed that c is not a null pointer because it is not a pointer! C# also makes a distinction between values and references, but it would be very strange if instance operator overloads were only definable on non-nullable value types, and it would also be strange if c+2 could throw a null reference exception.

These and other difficulties, along with the ease of building your own single (or double!) virtual dispatch mechanisms out of static mechanisms, make it easy to decide not to add instance or virtual operator overloading to C#.
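As a companion to the snippet above, here is one way (my sketch, not from the post) to build the double dispatch the text mentions out of the same static-plus-virtual parts; the class names are illustrative:

// The static operator dispatches on the runtime type of the left operand,
// and AddTo then dispatches on the runtime type of the right operand.
public abstract class Num
{
    public static Num operator +(Num a, Num b)
    {
        return a.AddTo(b); // first dispatch
    }

    public abstract Num AddTo(Num b);
    public abstract Num AddSmall(SmallNum a);
    public abstract Num AddBig(BigNum a);
}

public class SmallNum : Num
{
    public override Num AddTo(Num b) { return b.AddSmall(this); } // second dispatch
    public override Num AddSmall(SmallNum a) { return new SmallNum(); }
    public override Num AddBig(BigNum a) { return new BigNum(); }
}

public class BigNum : Num
{
    public override Num AddTo(Num b) { return b.AddBig(this); }
    public override Num AddSmall(SmallNum a) { return new BigNum(); }
    public override Num AddBig(BigNum a) { return new BigNum(); }
}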
http://blogs.msdn.com/ericlippert/archive/2007/05/14/why-are-overloaded-operators-always-static-in-c.aspx
by andy milligan, an architect and builder

The COM+ Integration and WS-AtomicTransaction hot fix package for Indigo is now available, and I thought that it would be good to give it a little exercise… I'll take a quick walk through the steps required to get a transactional client (using the very sweet System.Transactions) talking to a "transactions required" COM+ component using an Indigo transport.

Prepare your machines...

Having installed Indigo from the WinFX Beta 1 RC, you should download and install the hotfix for your platform and language. If you intend to use transactions with Indigo and/or Indigo's COM+ integration feature, this is a MUST have. You then need to enable the WS-AtomicTransaction protocol in the Distributed Transaction Coordinator (MSDTC) service. Use the "xws_reg -wsat+" command to do this – chances are that this is living in the directory %WINDOWS%\Microsoft.NET\Framework\v2.0.50215.

Rev your server-side code...

Rather than starting from scratch, the code will be a modification of the "Integrating with a COM+ Application as an Indigo Service" sample from the WinFX SDK – install the SDK for the full sample solution. This gives an Enterprise Services / COM+ component, and to turn it into the component that we need we can just add the TransactionAttribute set to TransactionOption.Required:

[Guid("BE62FF5B-8B53-476b-A385-0F66043049F6")]
[ProgId("ServiceModelSample.esCalculator")]
[System.EnterpriseServices.TransactionAttribute(System.EnterpriseServices.TransactionOption.Required)]
// Supporting implementation for the ICalculator interface.
public class esCalculatorService : ServicedComponent, ICalculator

Clearly, in a more realistic example I would present some code that hooked up with a Northwind database and performed some actual transacted resource work, but the effect is much the same. Having made the component transactional, you should build it, gac it, regsvcs it and ComSvcConfig it, as described in the WinFX sample documentation. The ComSvcConfig tool will recognize that the underlying app is transactional and will provide an appropriate transactional binding. At this point you should have a full Indigo service, fronting a COM+ app, ready and able to receive those web service transactions.

Rev your client-side code...

Again, you can modify the code from "Integrating with a COM+ Application as an Indigo Service". You need to update the binding to support flowing transactions and then modify the client code to create a transaction. Edit the client App.config file so that we define and use a new binding configuration. As you can see, this is really just enabling a flowTransactions attribute:

<system.serviceModel>
  <client>
    <endpoint configurationName="CalculatorEndpoint"
              address=""
              bindingSectionName="wsProfileBinding"
              bindingConfiguration="comTransactionalBinding"
              contractType="ICalculator" />
  </client>
  <bindings>
    <wsProfileBinding>
      <binding configurationName="comNonTransactionalBinding" reliableSessionEnabled="true" />
      <binding configurationName="comTransactionalBinding" flowTransactions="Allowed" reliableSessionEnabled="true" />
    </wsProfileBinding>
  </bindings>
</system.serviceModel>

For the client code modification, you need to add the namespace and reference (using System.Transactions), then modify Main. The modified code below will start a new transaction scope of the required type, then call the Add operation, automatically flowing the transaction as directed by the config file.
It will pause for a little breather and then, within the same transaction scope and if all went well, the transaction will commit (or it would if it had done any useful work).

static void Main()
{
    // Start executing within a new transaction
    Console.WriteLine("Creating transaction scope");
    TransactionOptions tOpt = new TransactionOptions();
    using (TransactionScope tx = new TransactionScope(TransactionScopeOption.RequiresNew, tOpt, EnterpriseServicesInteropOption.Full))
    {
        // Create a proxy with the given client endpoint configuration
        using (CalculatorProxy proxy = new CalculatorProxy("CalculatorEndpoint"))
        {
            // Call the Add service operation.
            double value1 = 100.00D;
            double value2 = 15.99D;
            double result = proxy.Add(value1, value2);
            Console.WriteLine("Add({0},{1}) = {2}", value1, value2, result);

            // Take a breather half way through...
            // You can see the transaction in the server's DTC Transaction List
            Console.WriteLine("\nPerformed within transaction {0}", Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());
            Console.WriteLine("Press <ENTER> to continue");
            Console.ReadLine();

            // Call the Subtract service operation.
            value1 = 145.00D;
            value2 = 76.54D;
            result = proxy.Subtract(value1, value2);
            Console.WriteLine("Subtract({0},{1}) = {2}", value1, value2, result);

            // Complete the transaction
            Console.WriteLine("\nCommitting transaction {0}", Transaction.Current.TransactionInformation.DistributedIdentifier.ToString());
            tx.Complete();

            // Close the proxy.
            proxy.Close();
        }
    }

    Console.WriteLine();
    Console.WriteLine("Press <ENTER> to terminate client.");
    Console.ReadLine();
}

As the inline comment says, do check the DTC Transaction List to validate that a transaction is underway. As always, feedback welcome.
http://blogs.msdn.com/distilled/archive/2005/06/23/431821.aspx
Topic Last Modified: 2007-06-27 What permissions do I need to be able to create and delete Exchange Server 2003 users? If you are responsible for both user and mailbox management, you need to have permissions to create a user object in Active Directory. For example, you could be a Domain Admin, Account Operator, or you might have delegated access to a specific organization unit. In addition, you need the following Exchange permission: If you are responsible for mailbox-enabling users post-account creation, you can use a reduced set of permissions (in addition to the Exchange View Only Administrator). Additionally, if you manage public folder objects, it is recommended that the administration account (that is, the account that you log on as when you manipulate objects in the Exchange System Manager) is mail-enabled or mailbox-enabled. In some cases, odd behavior in the permissioning user interface, as well as display name resolution errors may occur if the account administering public folder objects is not mailbox or mail-enabled. For more information, see "Other Problems" under the topic "Troubleshooting and Repairing Exchange Server 2003 Store Problems" in Working with Exchange Server 2003 Stores (). What permissions do I need to be able to modify a user object's mailbox rights? For an Exchange Administrator to properly modify a user or inetOrgPerson object's mailbox rights by means of the Mailbox Rights button on the Exchange Advanced tab of the Active Directory Users and Computers snap-in, you must have the following rights: For more information about modifying mailbox rights, see Microsoft Knowledge Base article 330475 "You need full mailbox access to change mailbox rights after you install Exchange 2000 SP3." This behavior was changed in Service Pack 2 for Exchange 2000 Server. However, an issue arose in Service Pack 3; therefore, to fully administer mailbox rights, you should install at least the Exchange 2000 Server Post-SP3 Rollup. For more information about the Post-SP3 release, see Microsoft Knowledge Base article 836488, "April 2004 Exchange 2000 Server post-Service Pack 3 update rollup." What permissions do I need to be able to move a mailbox between Exchange mailbox stores? The move mailbox functionality accessible from the Active Directory Users and Computers snap-in logs onto the source mailbox and moves the folders and messages to the destination mailbox. You can move mailboxes between mailbox stores in the same storage group, across different storage groups on the same server, between Exchange servers in the same administration group, or between Exchange servers in different administrative groups (Exchange Organization must be at Native Mode). You will need to have permissions on the user object in Active Directory to modify its Exchange mailbox attributes; a user who is an Account Operator will have these permissions. Additionally, you will require: Why does Exchange View Only Administrator require the right to create objects in the global namespace for Exchange Server 2003 Service Pack 1? When Service Pack 1 (SP1) for Exchange Server 2003 is running on Windows 2000 Server SP4 (or later) or Windows Server 2003, Exchange System Manager creates objects in the computer's global namespace. As a result, any administrator who is using Exchange System Manager must have the Create Global Objects right (SE_CREATE_GLOBAL_NAME) on the server. By default, local administrator accounts have this right. 
However, if the user's account has Exchange View Only Administrator rights but does not have local administrator rights on the computer, the user receives an error. This situation typically occurs when a user uses Terminal Services to access Exchange System Manager on another server for which the user is not a local administrator. To avoid this error, you can either add the user to the local administrators group, or you can grant the Create Global Objects right to the user. To grant this right, log on to the local computer using an account that is a member of the Administrator's group, and then grant this right to the user account in Local Security Settings. The Create Global Objects right does not exist in Windows 2000 Server SP3 (or earlier) or Windows XP. For these operating systems, no action is necessary. What permissions do I need to be able to move a mailbox from Exchange Server 5.5 to Exchange 2000 Server or Exchange Server 2003 in the same site or administrative group? When you use Active Directory Users and Computers, the mailbox is transferred between the two servers, and then the current credentials are used to update the Home-MTA and Home-MDB attributes on the user and mailbox object. You need the following permissions: What permissions do I need to create a new administrative group? To create a new administrative group, you need to be logged on to Active Directory as a user with the following permission: What permissions do I need to run the Active Directory Account Cleanup Wizard (ADclean.exe)? The administrator who uses ADclean.exe needs the following permissions in Active Directory: ADclean.exe modifies almost all of the attributes on the target object; therefore, it is recommended that the administrator who runs the tool be a member of the Domain Admins group of the target domain. What permissions do I need to be able to create a new Mailbox/Public Folder Store and/or Storage Group? You need to be logged on with the following permissions: What permissions do I need to be able to look at the Status node in the Exchange System Manager and perform Message Tracking? You need at least the Exchange View Only Administrator role to the Administrative Group(s) where tracking will take place. In large, multi-Administrative Group installations, it is recommended to give message tracking staff the Exchange View Only Administrator role at the Organization level because a single message track may include servers from any Administrative Group. Be aware that in Exchange 2000 Server 'Everyone' has read permissions to the message tracking log share on each Exchange server. If the administrator enables subject line logging, user data may be exposed. However, in Exchange Server 2003, the permissions on the message tracking log share have been restricted to the local Administrators group by default. For an added layer of security, you should further restrict the message tracking log by creating a security group of authorized personnel and allowing only that security group read access to the logs. If a user wants to perform message tracking on an Exchange server that is running Windows Server 2003, but the user does not have administrative permissions, you must grant the following WMI permissions to the account: For more information about how to grant WMI permissions to user accounts or groups that will perform message tracking, see How to Grant WMI Permissions for Message Tracking. What permissions do I need to be able to configure routing groups and connectors? 
When you configure routing groups and connectors, you are not directly affecting user account objects; therefore, you only need the following permission:

If you need to define the global message formats for specific outbound domains, or need to specify global message thresholds, you need the following permission:

What permissions do I need to start and stop an Internet protocol virtual server (for example, Simple Mail Transfer Protocol)?

You need either of the following permissions:

What permissions do I need to view the current sessions on an SMTP virtual server?

To view the current SMTP sessions, you need the following permissions:

What permissions do I need to be able to manipulate message queues?

To view the message queues in Exchange System Manager, you need the following permissions:

To remove messages from queues, you need either of the following permissions:

What permissions do I need to create, manage, and delete content indices?

To manage content indices, you need the following permissions:

What permissions do I need to apply a system policy?

To apply system policies, you need the following permissions:
http://technet.microsoft.com/en-us/library/bb124053(EXCHG.65).aspx
crawl-002
en
refinedweb
The data model (data and namespace) of LDAP is similar to that of the X.500 OSI directory service, but with lower resource requirements. The associated LDAP API simplifies writing Internet directory service applications. The LDAP API is applicable to directory management and browser applications that do not have directory service support as their primary function. LDAP is not, however, applicable to creating directories or to specifying how a directory service operates.

Client applications that use the LDAP API run on Windows Vista, Windows XP, and Windows 2000. All platforms must have TCP/IP installed. Active Directory servers that support client applications using the LDAP API include Windows Server 2008, Windows Server 2003, and Windows 2000 Server.

About LDAP: General information about the Lightweight Directory Access Protocol API.
Using LDAP: Programmer's guide to using the Lightweight Directory Access Protocol API.
LDAP Reference: Reference information for LDAP.
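The topic above covers the native LDAP C API. Purely as a hedged illustration of the same directory operations from managed code, the following minimal C# sketch uses the System.DirectoryServices.Protocols wrapper rather than the C API itself; the server name, credentials, and base DN are hypothetical.

// C#
using System;
using System.DirectoryServices.Protocols;
using System.Net;

class LdapDemo
{
    static void Main()
    {
        // Connect to a directory server (hypothetical host name).
        LdapConnection connection = new LdapConnection("ldap.example.com");
        connection.AuthType = AuthType.Basic;
        connection.Bind(new NetworkCredential("user", "password"));

        // Search the subtree for user objects, returning only the cn attribute.
        SearchRequest request = new SearchRequest(
            "dc=example,dc=com", "(objectClass=user)",
            SearchScope.Subtree, "cn");
        SearchResponse response = (SearchResponse)connection.SendRequest(request);

        foreach (SearchResultEntry entry in response.Entries)
            Console.WriteLine(entry.DistinguishedName);

        connection.Dispose();
    }
}

The bind, search, and unbind sequence mirrors the typical flow of an application written against the native API.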
http://msdn.microsoft.com/en-us/library/aa367008(VS.85).aspx
crawl-002
en
refinedweb
Topic Last Modified: 2004-06-08

The http://schemas.microsoft.com/cdo/configuration namespace defines the majority of fields used to set configurations for various CDO objects. These configuration fields are set using an implementation of the IConfiguration.Fields collection. Many CDO objects use information stored in an associated Configuration object to define configuration settings. One example is the Message object, where you use its associated Configuration object to set fields such as sendusing. This field defines whether to send the message using the local SMTP service pickup directory (if the local machine has the SMTP service installed) or an SMTP service directly over the network. If sending over the network, you set smtpserver to specify the IP address or DNS name of the machine hosting the SMTP service, and optionally, smtpserverport to specify a port value. If credentials are required for connecting to an SMTP service, you can specify them by setting the sendusername and sendpassword fields. A similar set of fields exists for posting messages using either a local NNTP service pickup directory, or over the network.

All of the names listed below are also defined as string constants in the type library and IDL file for convenience. In many cases, you may not wish to explicitly set the configuration for a particular object. The CDO component provides default values for the fields that depend on the software installed on the system. With Outlook Express installed on the system, various fields default to the settings associated with the default Outlook Express mail account for the default identity. The possible fields are numerous and not listed here. Check each field below for the Outlook Express value used as a default. In the case where both Outlook Express and SMTP/NNTP services exist on the same machine, both default settings are loaded. The sendusing and postusing fields, however, each default to 1, meaning to use the associated pickup directory for the installed service, if that service is installed.

Dim iConfig As New CDO.Configuration
Dim Flds As ADODB.Fields
iConfig.Load cdoSourceIIS
Set Flds = iConfig.Fields
With Flds
    ' It's good practice to use the module constants defined in the
    ' type library for the names. The full names are used here to
    ' indicate that this is what is going on.
    .Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "mail.example.com"
    .Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
    .Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = CdoSendUsingPort
    .Update
End With

#import "c:\program files\common files\system\ado\msado15.dll" no_namespace
#import <cdosys.dll> no_namespace
#include <cdosysstr.h> // string constants in this file

CoInitialize(NULL);
{
    IConfigurationPtr iConf(__uuidof(Configuration));
    FieldsPtr Flds;
    iConf->Load(cdoSourceIIS); // this enumerated constant comes from the import
    Flds = iConf->Fields;
    Flds->Item["http://schemas.microsoft.com/cdo/configuration/smtpserver"]->Value = _variant_t("mailserver");
    Flds->Item["http://schemas.microsoft.com/cdo/configuration/smtpserverport"]->Value = _variant_t((long)25);
    Flds->Item["http://schemas.microsoft.com/cdo/configuration/sendusing"]->Value = _variant_t((long)cdoSendUsingPort);
    Flds->Update();
    // ...
}
CoUninitialize();
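As a side-by-side illustration (not part of CDO), the same three configuration choices, server, port, and delivery mechanism, map onto properties in the managed System.Net.Mail API. The host and credentials below are hypothetical.

// C#
using System.Net;
using System.Net.Mail;

class SendDemo
{
    static void Main()
    {
        // Rough managed equivalent of smtpserver / smtpserverport.
        SmtpClient client = new SmtpClient("mail.example.com", 25);

        // Rough managed equivalent of sendusing = cdoSendUsingPort;
        // SmtpDeliveryMethod.PickupDirectoryFromIis parallels the
        // local-pickup-directory option.
        client.DeliveryMethod = SmtpDeliveryMethod.Network;

        // Rough managed equivalent of sendusername / sendpassword.
        client.Credentials = new NetworkCredential("user", "password");

        client.Send(new MailMessage(
            "from@example.com", "to@example.com", "Test", "Body text"));
    }
}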
http://msdn.microsoft.com/en-us/library/ms526318%28EXCHG.10%29.aspx
crawl-002
en
refinedweb
Nigel Thompson Microsoft Developer Network Technology Group March 20, 1995 This technical article is the second in a series that describes creating and using 32-bit Component Object Model (COM) objects with Visual C++™ and the Microsoft® Foundation Class Library (MFC). This article describes how COM object interfaces are used to control the objects themselves. In "MFC/COM Objects 1: Creating a Simple Object," we looked at creating a simple COM object and using that object from inside an application. The object we created was a simple light bulb, which supported an interface that allowed the application using the object to tell the object to draw itself. I called that interface IDrawing. The application didn't need to know anything about the light bulb other than the fact that it supported the IDrawing interface. Because the application (the house) knew how to use the IDrawing interface, it was able to show the light bulb by calling appropriate functions in the light bulb's IDrawing interface. What I'm going to do next is introduce some other objects: a standard lamp, a TV, and a radio. These new objects will all support the IDrawing interface, so the house will be able to show them in the rooms they occupy. We're also going to define some new interfaces and show how the house can use these new interfaces to control one of the objects (the lamp, TV, or radio) without knowing exactly what kind of object it's actually dealing with. What's the point of this? The idea is that at some point in the future, new objects can be added to the house, and the house will be able to control those new objects without any changes in the house code, providing the new objects support the interfaces the house understands. So the house code becomes somewhat future-proof as new appliances are invented. All appliances are not created equal. Some appliances are very simple and can only be turned on and off. Lights can be on or off, and some lights can be set at intermediate levels of brightness. And some appliances, like TVs or radios, can be off or on, but also can be on a particular station, and so on. It's obviously possible to design an interface that deals with off and on. It's reasonably easy to see how to design an interface that also deals with brightness, but it's not easy to see how to design the totally universal remote control that would be needed to control all the TVs, radios, and stereos in the world. It's not even very practical to try to design a future-proof TV or radio controller, so how are we going to produce a generic interface that the house can use for TVs and radios? We're not. We're going to let the appliance itself do that, and the house will simply ask the appliance if it has some fancy interface of its own that it can show to the user. So when the house wants to control an appliance, it first asks the appliance if it has its own user interface, and if so, it tells the appliance to show the control panel to the user. If the appliance isn't that smart, the house tries to see if the appliance has a brightness-control interface. If it does, the house can show a simple dimmer-type control, and use the brightness interface in the appliance to set the light level in response to what the user does with the dimmer. And as a last resort, the house can see if the appliance supports the on-off interface, and if so, the house can show the user a simple switch and use the appliance's on-off interface to control it. 
The interfaces that the house understands are as follows: IApplianceUI, ILight, and IOutlet. All appliances support one or more of these interfaces. Let's see how the house uses these when the user double-clicks an object:

void CMainFrame::OnLButtonDblClk(UINT nFlags, CPoint point)
{
    if (m_pSelectRect == NULL) return; // No selection

    // Get the object's IUnknown interface pointer.
    IUnknown* pIUnknown = m_pAppliance[m_iSelect];
    ASSERT(pIUnknown);

    // See if it supports the IApplianceUI interface.
    IApplianceUI* pIApplianceUI = NULL;
    if (pIUnknown->QueryInterface(IID_IApplianceUI,
                                  (LPVOID*)&pIApplianceUI) == S_OK) {
        // Put up the interface.
        pIApplianceUI->ShowControl(this);
        pIApplianceUI->Release();
        return;
    }

    // See if it supports the ILight interface.
    ILight* pILight = NULL;
    if (pIUnknown->QueryInterface(IID_ILight, (LPVOID*)&pILight) == S_OK) {
        // Put up the interface.
        CLightDlg* pDlg = new CLightDlg;
        pDlg->m_pILight = pILight;
        pDlg->m_pParent = this;
        pDlg->Create();
        pILight->Release();
        return;
    }

    // See if it supports the IOutlet interface.
    IOutlet* pIOutlet = NULL;
    if (pIUnknown->QueryInterface(IID_IOutlet, (LPVOID*)&pIOutlet) == S_OK) {
        // Put up the interface.
        COutletDlg* pDlg = new COutletDlg;
        pDlg->m_pIOutlet = pIOutlet;
        pDlg->m_pParent = this;
        pDlg->Create();
        pIOutlet->Release();
        return;
    }

    // Pretty much a dead loss, this one.
    AfxMessageBox("This appliance is not controllable");
}

First of all, the house gets a pointer to the object's IUnknown interface. This pointer was saved when the object was first created and inserted in the house. Then the object's IUnknown::QueryInterface function is called to see if the object supports the IApplianceUI interface. If it does, the object is asked to show its own control. Figure 1 shows the radio's controller.

Figure 1. The radio's controller

If IApplianceUI is not supported, the house tries for the ILight interface. If this is supported by the object, the house puts up a slider control like the one shown in Figure 2.

Figure 2. The house's light controller

When the user moves the slider, the house uses the object's ILight::SetBrightness function to set the new light level. If the object doesn't support ILight, the house finally tries for the IOutlet interface, and if it finds the interface, it uses a control like the one shown in Figure 3 to allow the user to switch the appliance on or off.

Figure 3. The house's on-off controller

If you've played with the HOUSE2 sample, you'll have discovered that the lights do turn on and off, and the radio plays a tune. That's what we wanted to happen, of course, but there's a part of the story that I've skipped. Let's say you double-clicked on a light bulb and got a controller like the one shown in Figure 3. You click the On button, and the house responds by calling the object's IOutlet::On function. The object then sets its own state to be ON. Fine and dandy, but how does the image of the light in the house change to show the new state? The object can't just redraw itself because it can't know how the house is rendering its own image. For example, the house may be using an off-screen bitmap to compose changes in its own image. In order to show the new state of the light bulb, the house must ask the light bulb to draw itself to the off-screen buffer (or whatever) and then make these changes show on the screen. In the sample shown here, I cheated to make the light bulb's visible state change. When you click the buttons in the Outlet dialog box, the dialog code sends a message to the house's main window, asking it to repaint itself.
As a result of this message, the house will ask the light bulb to draw itself, and so you'll see the change of state. When you play with the radio's buttons, the radio itself starts the tune playing. There is no visible change of state in the house—perhaps you hadn't noticed that!

What we have so far is a push-only system. The application using the COM objects is telling them what to do, but isn't providing a path for information from the objects to the application. So currently there is no way for an appliance to tell the house that its state has changed and the house should redraw it. Another side effect of this push-only model is that if you double-click an object twice, so as to bring up two instances of its controller, you'll find that the controllers don't know about each other. So (for example) if you have two sliders controlling the same light, moving one slider doesn't move the other one—it just changes the state of the light. What we really need is a way for users of an object to be notified of changes of state in the object. When an object changes state, its users may indicate that change by either redrawing the object or showing a control in a new position. We'll be looking at how to implement this in the next article in the series, "MFC/COM Objects 3: Objects That Talk Back."

The COM objects we created in the previous article, "MFC/COM Objects 1: Creating a Simple Object," had only a single interface: IDrawing. Let's look at what's required to support several interfaces in a single object. We'll look at how the standard lamp, which implements IDrawing, IOutlet, and ILight, was done. Let's begin by looking at what got added to the header file to define the new interfaces. Here's part of the STANDARD.H file showing all the interface definitions:

class CStandardLamp : public CCmdTarget
{
    [...]
    // Declare the interface map for this object.
    DECLARE_INTERFACE_MAP()

    // IDrawing interface
    BEGIN_INTERFACE_PART(Drawing, IDrawing)
        STDMETHOD(Draw)(CDC* pDC, int x, int y);
        STDMETHOD(SetPalette)(CPalette* pPal);
        STDMETHOD(GetRect)(CRect* pRect);
    END_INTERFACE_PART(Drawing)

    // IOutlet interface
    BEGIN_INTERFACE_PART(Outlet, IOutlet)
        STDMETHOD(On)();
        STDMETHOD(Off)();
        STDMETHOD(GetState)(BOOL* pState);
    END_INTERFACE_PART(Outlet)

    // ILight interface
    BEGIN_INTERFACE_PART(Light, ILight)
        STDMETHOD(SetBrightness)(BYTE bLevel);
        STDMETHOD(GetBrightness)(BYTE* pLevel);
    END_INTERFACE_PART(Light)

    [...]
};

Notice that all that's needed is a declaration of each interface supported. The BEGIN_INTERFACE_PART and END_INTERFACE_PART macros are supplied by the Microsoft® Foundation Class Library (MFC). The STDMETHOD macro is supplied by the OLE libraries (which define COM objects). The implementation is a little bit more involved because every interface must support the IUnknown interface functions. MFC provides almost everything needed to support IUnknown in your own interfaces, but you still need to write a small amount of code to implement AddRef, Release, and QueryInterface. If you refer to the first article in this series ("MFC/COM Objects 1: Creating a Simple Object"), you'll see that I included code for these functions in the IDrawing interface. We need to include almost exactly the same code in the IOutlet and ILight interfaces. In fact, it's so similar that I gave in and used a macro to avoid repeatedly typing the same code. The macro is called IMPLEMENT_IUNKNOWN and can be found in the IMPIUNK.H file. Please note that although this sounds a lot like an MFC macro name, it is not an MFC macro.
Here it is:

#ifndef IMPLEMENT_IUNKNOWN

#define IMPLEMENT_IUNKNOWN_ADDREF(ObjectClass, InterfaceClass) \
    STDMETHODIMP_(ULONG) ObjectClass::X##InterfaceClass::AddRef(void) \
    { \
        METHOD_PROLOGUE(ObjectClass, InterfaceClass); \
        return pThis->ExternalAddRef(); \
    }

#define IMPLEMENT_IUNKNOWN_RELEASE(ObjectClass, InterfaceClass) \
    STDMETHODIMP_(ULONG) ObjectClass::X##InterfaceClass::Release(void) \
    { \
        METHOD_PROLOGUE(ObjectClass, InterfaceClass); \
        return pThis->ExternalRelease(); \
    }

#define IMPLEMENT_IUNKNOWN_QUERYINTERFACE(ObjectClass, InterfaceClass) \
    STDMETHODIMP ObjectClass::X##InterfaceClass::QueryInterface(REFIID riid, LPVOID* ppVoid) \
    { \
        METHOD_PROLOGUE(ObjectClass, InterfaceClass); \
        return (HRESULT)pThis->ExternalQueryInterface(&riid, ppVoid); \
    }

#define IMPLEMENT_IUNKNOWN(ObjectClass, InterfaceClass) \
    IMPLEMENT_IUNKNOWN_ADDREF(ObjectClass, InterfaceClass) \
    IMPLEMENT_IUNKNOWN_RELEASE(ObjectClass, InterfaceClass) \
    IMPLEMENT_IUNKNOWN_QUERYINTERFACE(ObjectClass, InterfaceClass)

#endif // IMPLEMENT_IUNKNOWN

Now that we have the macro, we can look at how the standard lamp's interfaces are implemented with a bit less clutter. The first addition is to the interface map:

BEGIN_INTERFACE_MAP(CStandardLamp, CCmdTarget)
    INTERFACE_PART(CStandardLamp, IID_IDrawing, Drawing)
    INTERFACE_PART(CStandardLamp, IID_IOutlet, Outlet)
    INTERFACE_PART(CStandardLamp, IID_ILight, Light)
END_INTERFACE_MAP()

We have simply added entries for the IOutlet and ILight interfaces. I'm not going to show you the implementation of IDrawing because that's unchanged. Let's look at how IOutlet is implemented:

/////////////////////////////////////////////////////////
// IOutlet interface

// IUnknown for IOutlet
IMPLEMENT_IUNKNOWN(CStandardLamp, Outlet)

// IOutlet methods
STDMETHODIMP CStandardLamp::XOutlet::On()
{
    METHOD_PROLOGUE(CStandardLamp, Outlet);
    pThis->m_bLevel = 255;
    return NOERROR;
}

STDMETHODIMP CStandardLamp::XOutlet::Off()
{
    METHOD_PROLOGUE(CStandardLamp, Outlet);
    pThis->m_bLevel = 0;
    return NOERROR;
}

STDMETHODIMP CStandardLamp::XOutlet::GetState(BOOL* pState)
{
    METHOD_PROLOGUE(CStandardLamp, Outlet);
    if (!pState) return E_INVALIDARG;
    *pState = (pThis->m_bLevel > 0) ? TRUE : FALSE;
    return NOERROR;
}

Yes, that's the entire thing. The IMPLEMENT_IUNKNOWN macro saves a lot of clutter. Note that each of the methods must include the METHOD_PROLOGUE macro, which defines the pThis pointer used to access the members of the containing class (CStandardLamp). The actual implementation of the functionality of the On, Off, and GetState functions is trivial. Implementation of the ILight interface is even simpler:

// IUnknown for ILight
IMPLEMENT_IUNKNOWN(CStandardLamp, Light)

// ILight methods
STDMETHODIMP CStandardLamp::XLight::SetBrightness(BYTE bLevel)
{
    METHOD_PROLOGUE(CStandardLamp, Light);
    pThis->m_bLevel = bLevel;
    return NOERROR;
}

STDMETHODIMP CStandardLamp::XLight::GetBrightness(BYTE* pLevel)
{
    METHOD_PROLOGUE(CStandardLamp, Light);
    if (!pLevel) return E_INVALIDARG;
    *pLevel = pThis->m_bLevel;
    return NOERROR;
}

Adding interfaces to an existing COM object is quite simple. For an application to use a new interface, it needs only to understand the interface; it need have no knowledge of the nature of the actual object that supports it. You might try adding a new appliance to the sample given here and see if the house can control it correctly. If your new appliance is to have its own user interface, take a look at how the radio was implemented.
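The QueryInterface cascade shown earlier is a useful pattern beyond COM. Purely as a hedged illustration (not part of the article, and with hypothetical names), here is the same capability-probing idea expressed in C#, where interface casts play the role of QueryInterface:

// C#
// Hypothetical interfaces mirroring the article's IOutlet and ILight.
public interface IOutlet { void On(); void Off(); }
public interface ILight : IOutlet { void SetBrightness(byte level); }

public class House
{
    // Probe an appliance for the richest interface it supports,
    // falling back to simpler ones -- the same pattern the house
    // follows with QueryInterface.
    public void Control(object appliance)
    {
        ILight light = appliance as ILight;
        if (light != null) { light.SetBrightness(128); return; }

        IOutlet outlet = appliance as IOutlet;
        if (outlet != null) { outlet.On(); return; }

        System.Console.WriteLine("This appliance is not controllable");
    }
}

The design point is the same in both worlds: the client never needs to know the concrete appliance type, only the richest interface it can obtain from it.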
http://msdn.microsoft.com/en-us/library/ms809987.aspx
crawl-002
en
refinedweb
Many components are fairly simple. They might consist of a single class with a simple object model or no object model. Other components are more complex, and might need to contain and manage a large number of subordinate objects. Nested classes are one way for complex components to contain and manage the objects they need. A nested class is a class that is fully enclosed within another class declaration. For example, a nested class and an enclosing class might look like the following example:

' Visual Basic
' This is the enclosing class, whose class declaration contains the nested
' class.
Public Class EnclosingClass
    ' This is the nested class. Its class declaration is fully contained
    ' within the enclosing class.
    Public Class NestedClass
        ' Insert code to implement NestedClass.
    End Class
    ' Insert code to implement EnclosingClass.
End Class

// C#
// This is the enclosing class, whose class declaration contains the
// nested class.
public class EnclosingClass
{
    // This is the nested class. Its class declaration is fully contained
    // within the enclosing class.
    public class NestedClass
    {
        // Insert code to implement NestedClass.
    }
    // Insert code to implement EnclosingClass.
}

In this example, the class declaration for NestedClass is fully contained by the class declaration of EnclosingClass. As a result of being contained within the enclosing class, the nested class gains a certain level of protection. Unless you use the Imports (using in C#) statement, all references to the nested class must be qualified with the name of the containing class. For example, to instantiate the nested class in the previous example, you would have to use the following syntax:

' Visual Basic
Dim aClass As New EnclosingClass.NestedClass()

// C#
EnclosingClass.NestedClass aClass = new EnclosingClass.NestedClass();

The access level of the nested class is implicitly limited by the access level of the enclosing class. Even if the nested class is Public, if the enclosing class is Friend (internal in C#), then only members of the assembly will be able to access the nested class, and if the enclosing class is Private, then the nested class will be unavailable to all callers except the enclosing class. Assuming a Public enclosing class, the access level of the nested classes pretty much follows the same rules as for access of unnested classes. Friend classes are available to members of the assembly, but not external clients. Private classes are available to the enclosing class, other nested classes within the enclosing class, and any classes nested within other nested classes.

Nested classes are useful when an object will logically contain subordinate objects, but other objects would have little use for those objects. An example might be a Wheel class. This could be a class that clients could create and use wherever a wheel might be needed in their application. But except in the most primitive implementations, a wheel is not just a single object; it is composed of several subordinate objects, each of which helps make up the wheel. A wheel might have a Rim object, a Tire object, Spoke objects, and other objects, without which the wheel could not function. But the average user would have no need to create a spoke, or a rim, or a bearing — all he's interested in is the wheel. In a case like this, it makes sense for the Wheel class to contain the implementation for all of its subordinate classes.
This way, the wheel can create and manage any contained objects it may need while conveniently hiding the details of this implementation from the client. Those subordinate objects the client might reasonably need to have contact with now and again (for example, a Tire object) can be exposed as part of the public object model, and those that a client should never see (for example, a Bearings collection) could be declared private and hidden. A sketch of this arrangement follows.

See also: Recommendations on Nested Classes in Components | Implementing Nested Classes | Components that Contain Other Components
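To make the Wheel example concrete, here is a minimal C# sketch (all names hypothetical) of how the access levels described above might be arranged:

// C#
// Hypothetical sketch of the Wheel example. Tire is a public nested class
// that clients can reach as Wheel.Tire; Bearing is private and visible
// only to code inside Wheel.
public class Wheel
{
    public class Tire
    {
        public int pressurePsi;
    }

    private class Bearing
    {
        // Implementation detail hidden from all callers outside Wheel.
    }

    private Bearing bearing = new Bearing();
    public Tire frontTire = new Tire();
}

A client could then write Wheel.Tire spare = new Wheel.Tire(); but any reference to Wheel.Bearing would fail to compile, because the Bearing class never appears in the public object model.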
http://msdn.microsoft.com/en-us/library/cbwxw0ye(VS.71).aspx
crawl-002
en
refinedweb
.adm file A system policy template file that defines the system policies and restrictions that you can set for the desktop, shell, and/or system security. See also system policy.
.cab file See cabinet (.cab) file.
.inf file See information (.inf) file.
.ins file See Internet settings (.ins) file.
ActiveX control A reusable software component based on Microsoft's ActiveX technology that is used to add interactivity and more functionality, such as animation or a pop-up menu, to a Web page, applications, and software development tools.
add-on component A component that is not included in your package, but is one that your users can install after they complete Windows Update Setup for Internet Explorer 6 and Internet Tools.
address In reference to the Internet, the name of a site that users can connect to, or the address of an e-mail recipient, such as name@example.microsoft.com. A typical address starts with a protocol name (such as ftp:// or http://) followed by the name of the organization that maintains the site. The suffix identifies the kind of organization. For example, commercial site addresses often end with .com.
answer file A text file that scripts the answers for a series of graphical user interface (GUI) dialog boxes. The answer file for Setup, for example, automates the setup process. You can create or modify an answer file in a text editor or through Setup Manager. See also unattended Setup.
attached behavior A behavior that binds asynchronously to a standard HTML element either through a CSS declaration of the behavior property or procedurally through the addBehavior and removeBehavior methods. Attached behaviors overwrite the default behavior of the element to which they are attached. See also element behavior.
Authenticode A technology that makes it possible to identify who has published a piece of software and to verify that it has not changed since publication.
automatic configuration A process that lets corporate administrators manage and update user settings, system policies, and restrictions for Microsoft Internet Explorer from a central location. A pointer to an automatic-configuration file can be set manually within the browser or by configuring the browser installation using the IEAK.
automatic detection A feature in the IEAK, based on Web Proxy AutoDiscovery (WPAD), that enables automatic configuration and automatic proxy to work when a user connects to a network for the first time. With automatic detection turned on, the browser is automatically configured when it is started, even if the corporate administrator did not customize the browser. See also automatic configuration; automatic proxy; Web Proxy AutoDiscovery (WPAD).
automatic image resizing The automatic resizing of larger pictures so that they fit within the dimensions of the browser window.
automatic proxy A feature that allows an administrator to configure Internet Explorer so that the browser determines dynamically whether to connect directly to a host or to use a proxy server.
automatic search A feature of Internet Explorer that enables users to type a word or phrase in the Address bar to search for a Web site.
AVS See Automatic Version Synchronization (AVS).
cabinet (.cab) file A single file that stores multiple compressed files. These files are commonly used in software installation and to reduce the file size and the associated download time for Web content.
cache An area on the hard disk reserved for storing images, text, and other files that the user previously viewed on the Internet.
certificate See digital public key.
CMAK See Connection Manager Administration Kit (CMAK).
code signing The process of signing a completed Internet Explorer package with a digital certificate. Signing the package requires two steps: obtaining a digital certificate and signing the code. See also digital certificate.
Connection Manager A client dialer used to obtain Internet access. It can be customized with the Connection Manager Administration Kit (CMAK).
Connection Manager Administration Kit (CMAK) A tool for customizing the appearance and functionality of the Connection Manager.
cookie A small file that an individual Web site stores on your computer. Web sites can use cookies to maintain information and settings, such as your customization preferences.
corporate administrator An individual who is responsible for setting up and maintaining computers and applications across a corporation. Administrators also manage user and group accounts, assign passwords and permissions, and help users with networking issues.
custom element In an HTML document, a user-defined element that has explicit namespaces.
Customization Wizard See Internet Explorer Customization Wizard.
data binding The process of associating the objects or controls of an application with a data source, such as a database field. The contents of a control associated with a data source are associated with values from a database.
DHCP See Dynamic Host Configuration Protocol (DHCP).
DHTML See Dynamic HTML (DHTML).
digital signature A means for originators of a message, file, or other digitally encoded information to bind their identity to the information. The process of digitally signing information entails transforming the information, as well as some secret information held by the sender, into a tag called a signature.
DNS See Domain Name System (DNS).
DNS server A computer maintained by an ISP that matches IP addresses to host names. Some ISPs provide a specific DNS address.
Document Object Model A World Wide Web Consortium specification that describes the structure of Dynamic HTML and XML documents in a way that allows them to be manipulated through a Web browser.
DUN See Dial-Up Networking (DUN).
Dynamic Host Configuration Protocol (DHCP) A TCP/IP protocol that enables a network connected to the Internet to assign a temporary IP address to a host automatically when the host connects to the network. See also Transmission Control Protocol/Internet Protocol (TCP/IP); IP address.
Dynamic HTML (DHTML) A collection of features that extends the capabilities of traditional HTML, giving Web authors more flexibility, design options, and creative control over the appearance and behavior of Web pages. See also Hypertext Markup Language (HTML).
element behavior A behavior that binds to a standard HTML element such that it can never be detached; it is considered an intrinsic part of the element being defined. Element behaviors are used to define new elements. See also attached behavior.
encryption A method for making data indecipherable to protect it from unauthorized viewing or use.
Explorer bar The left side of the browser where the Search, History, and Favorites lists appear when the user clicks the corresponding buttons on the toolbar. The user can also create a custom Explorer bar, as well as a custom toolbar button to open it.
Favorites Predefined links to Web sites. Favorites are also known as "bookmarks." Favorites in Internet Explorer can be configured to automatically notify the user when content changes.
gateway A connection or interchange point that connects two networks that otherwise would be incompatible.
Group Policy.
hands-free installation A configuration of Windows Update Setup for Internet Explorer and Internet Tools in which users are not prompted to make decisions but are informed of the installation progress and errors. This option is available only to corporate administrators. See also silent installation.
home page The first page that users see when they start Internet Explorer. Also, the main page of a Web site, which usually contains a main menu or table of contents with links to other pages within the site.
HTML See Hypertext Markup Language (HTML).
HTML+TIME See HTML+Timed Interactive Multimedia Extensions (TIME).
HTML+Timed Interactive Multimedia Extensions (TIME) An Internet Explorer feature that adds timing, media synchronization, and animation support to Web pages.
Hypertext Markup Language (HTML) A simple markup language used to create and design Web pages. HTML files are simple ASCII text files with codes embedded (indicated by markup tags) to denote formatting and hypertext links.
ICP See Internet Content Provider (ICP).
IEAK See Internet Explorer Administration Kit (IEAK).
IIS See Internet Information Services (IIS).
IMAP See Internet Message Access Protocol (IMAP).
IMAP server A server that uses IMAP to provide access to multiple server-side folders. See also Internet Message Access Protocol (IMAP); POP3 server.
independent software vendor (ISV) A third-party software developer; an individual or an organization that independently creates computer software.
information (.inf) file A file that provides Windows Update Setup for Internet Explorer and Internet Tools with the information required to set up a device or program. The file includes a list of valid configurations, the name of driver files associated with the device or program, and so on.
Integrated Windows Authentication A secure authentication method that uses a cryptographic exchange between a client and a server rather than transmitting a user name and a password.
Internet Information Services (IIS) Software services that support Web site creation, configuration, and management, along with other Internet functions. Internet Information Services include Network News Transfer Protocol (NNTP), File Transfer Protocol (FTP), and Simple Mail Transfer Protocol (SMTP). See also Network News Transfer Protocol (NNTP).
Internet Message Access Protocol (IMAP).
Internet Protocol (IP) A routable protocol in the TCP/IP protocol suite that is responsible for IP addressing, routing, and the fragmentation and reassembly of IP packets. See also Transmission Control Protocol/Internet Protocol (TCP/IP).
Internet Protocol address (IP address) A 32-bit binary number used to identify a node on an IP internetwork. Each node must be assigned a unique IP address, which is made up of the network ID plus a unique host ID. This address consists of the decimal values of its 4 bytes, separated with periods (for example, 192.168.7.27).
Internet service provider (ISP) An organization that maintains a server directly connected to the Internet. Users who are not directly connected to the Internet typically connect through a service provider. To acquire these connections, users call the provider and set up an account.
Internet settings (.ins) file A file that provides Windows Update Setup for Internet Explorer and Internet Tools with Internet settings that configure the browser and associated components.
You can create multiple versions of your browser package by changing the .ins file used by each package. Use the Profile Manager to create, save, and load .ins files.
IP address See Internet Protocol address (IP address).
ISP See Internet service provider (ISP).
ISV See independent software vendor (ISV).
Kerberos authentication A protocol that provides a mechanism for mutual authentication between a client and a server before a network connection is opened between them. The protocol assumes that initial transactions between clients and servers take place on an open network.
kiosk mode A browser mode in which the browser toolbar and menu bar are not displayed.
lab A collection of non-production machines used to test an Internet Explorer package. The lab is not the same as a pilot group.
LDAP See Lightweight Directory Access Protocol (LDAP).
Lightweight Directory Access Protocol (LDAP) An open standard for storing and retrieving people's names, e-mail addresses, phone numbers, and other information.
lightweight HTML component An HTML component in which the lightweight attribute is specified for the PUBLIC:COMPONENT element. Because the .htc files for this component contain no HTML content or contain static HTML content that is ignored, the HTML document is less complex.
Media bar In Internet Explorer, an Explorer bar that provides a simple user interface for locating and playing media within the browser window.
MIME See Multipurpose Internet Mail Extensions (MIME).
MSDN Microsoft Developer Network.
Multipurpose Internet Mail Extensions (MIME) A standard that extends SMTP to allow the transmission of such data as video, sound, and binary files across the Internet without translating the data into ASCII format. See also SMTP (Simple Mail Transfer Protocol).
namespace A collection of names that are used to uniquely qualify elements.
Network News Transfer Protocol (NNTP) A member of the TCP/IP suite of protocols used to distribute network news messages. See also Transmission Control Protocol/Internet Protocol (TCP/IP).
NNTP See Network News Transfer Protocol (NNTP).
Platform for Internet Content Selection (PICS) Rules that enable Web content providers to use meta tags to voluntarily rate their content according to agreed-upon PICS criteria. A browser can then block user access to Web sites based on the values of the tags.
PICS See Platform for Internet Content Selection (PICS).
platform A type of client, such as Windows 2000, Windows NT 4.0, Windows Millennium Edition, Windows 98, Windows 3.x, Macintosh, or UNIX.
policy file A file that defines system policies and restrictions. See also system policies and restrictions.
POP3 (Post Office Protocol version 3) A protocol used to retrieve e-mail messages from a mail server. See also SMTP (Simple Mail Transfer Protocol); Internet Message Access Protocol (IMAP).
POP3 server A server that provides access to a single Inbox. See also IMAP server.
private key The secret half of a cryptographic key pair that is used with a public key algorithm. Private keys are typically used to decrypt a symmetric session key, digitally sign data, or decrypt data that has been encrypted with the corresponding public key. See also public key.
Profile Manager A tool in the Internet Explorer Administration Kit (IEAK) used by corporate administrators to create and dynamically manage browser and desktop automatic configuration settings.
proxy A firewall and content cache server that provides Internet security and improves network performance.
proxy server A server that works as a barrier between an internal network (intranet) and the Internet. Proxy servers can work with firewalls, which help keep other people on the Internet from gaining access to confidential information on the intranet.
A proxy server also allows the caching of Web pages for quicker retrieval.
public key The non-secret half of a cryptographic key pair that is used with a public key algorithm. See also private key.
quiet mode The state in which a command-line application runs with little or no input from the user.
Resultant Set of Policy (RSoP) An IEAK snap-in that helps you plan browser policies before you deploy your custom browser packages.
root certificate A self-signed certification authority certificate. See also certification authority (CA); digital certificate.
RSoP See Resultant Set of Policy (RSoP).
RunOnce application An application that is configured to run the next time the computer is restarted. The application does not run after any subsequent reboots of the system.
sandbox In Java, an area in memory outside of which the program cannot make calls.
scratch space The storage area on the client computer that an applet can safely access without needing full access to the client file system.
security zone In Internet Explorer, a segment of the Internet or intranet assigned a particular level of security, depending on how much the administrator trusts the content of the Web site. Security zones allow an administrator to restrict user access to certain Web sites.
Seek bar A control on the Media bar that allows the user to view and change the progress of a media file while it is playing.
Server Gated Cryptography (SGC) An extension of Secure Sockets Layer (SSL) that makes possible the use of 128-bit encryption. See also Secure Sockets Layer (SSL).
SGC See Server Gated Cryptography (SGC).
signature See digital signature.
silent installation A configuration of Windows Update Setup for Internet Explorer and Internet Tools in which users are not prompted to make decisions about installation options and are not informed of the installation progress or errors. This option is available only to corporate administrators. See also hands-free installation.
single-disk branding Customizing an existing installation of Internet Explorer, including Internet sign-up for ISPs, without reinstalling Internet Explorer. This option does not enable you to package and install custom components.
SMTP (Simple Mail Transfer Protocol) A protocol used for transferring or sending e-mail messages between servers. Another protocol (such as POP3) is used to retrieve the messages.
SSL See Secure Sockets Layer (SSL).
subkey An element of the registry that contains entries or other subkeys; a tier of the registry that is immediately below a key or a subtree (if the subtree has no keys).
Systems Management Server (SMS) Systems management software that can help you automate a large-scale deployment by automatically distributing and installing your custom browser packages on users' computers.
system policies and restrictions Settings, defined in a policy file, that control user and computer access privileges by overriding default registry values when the user logs on.
TCP/IP See Transmission Control Protocol/Internet Protocol (TCP/IP).
unattended Setup An automated, hands-free method of installing Windows. During installation, unattended Setup uses an answer file to supply data to Setup instead of requiring that an administrator interactively provide the answers.
user-agent string Text that identifies the specific version and origin of the browser.
Viewlink A feature of the DHTML behavior component model that enables you to write fully encapsulated element behaviors and then import them as custom elements in Web pages.
virtual machine (VM) A program that provides an independent operating system environment within another operating system.
A virtual machine permits the user to run programs that are native to a different operating system.
VM See virtual machine (VM).
watermark A bitmap that is displayed behind the Internet Explorer toolbar. Color the watermark so that it does not obscure the text or graphics of toolbar buttons.
Windows Desktop Update A feature included in Windows 98, Windows 98 Second Edition, Windows Millennium Edition, Windows 2000, and Windows XP that can be used to make users' desktops and folders look and work more like the Web.
Windows Update Setup for Internet Explorer 6 and Internet Tools The setup program that installs Internet Explorer and other Internet components. The IEAK allows you to customize Windows Update Setup for Internet Explorer and Internet Tools to provide a better experience for your users.
http://technet.microsoft.com/en-us/library/dd346954.aspx
crawl-002
en
refinedweb
Represents a dynamic data collection that provides notifications when items get added or removed, or when the entire list is refreshed.

' Visual Basic
Public Class ObservableCollection(Of T) _
    Inherits Collection(Of T) _
    Implements INotifyCollectionChanged, INotifyPropertyChanged

Dim instance As ObservableCollection(Of T)

// C#
public class ObservableCollection<T> : Collection<T>, INotifyCollectionChanged, INotifyPropertyChanged

T: The type of elements in the collection.

You can enumerate over any collection that implements the IEnumerable interface. However, to set up dynamic bindings so that insertions or deletions in the collection update the user interface automatically, the collection must implement the INotifyCollectionChanged interface. For this purpose, Silverlight provides the ObservableCollection<T> class, which is a provided base class data collection that implements the INotifyCollectionChanged interface, as well as the INotifyPropertyChanged interface. It also has the expected collection support, defined by deriving from the Collection<T> class. For an example, see Walkthrough: Binding to a Collection and Creating a Master/Details View.

For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers.
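As a small usage sketch (collection and element names hypothetical), the CollectionChanged event fires for each mutation of the collection:

// C#
using System;
using System.Collections.ObjectModel;
using System.Collections.Specialized;

class Demo
{
    static void Main()
    {
        ObservableCollection<string> names = new ObservableCollection<string>();
        names.CollectionChanged += delegate(object sender, NotifyCollectionChangedEventArgs e)
        {
            // e.Action reports Add, Remove, Replace, Move, or Reset.
            Console.WriteLine("Action: " + e.Action);
        };
        names.Add("First");   // Raises CollectionChanged with Action = Add.
        names.RemoveAt(0);    // Raises CollectionChanged with Action = Remove.
    }
}

In a Silverlight or WPF application the binding engine subscribes to this event for you, which is why inserting into or deleting from the collection updates the bound UI automatically.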
http://msdn.microsoft.com/en-us/library/ms668604(VS.95).aspx
crawl-002
en
refinedweb
Microsoft Internet Security and Acceleration (ISA) Server 2004 provides many features for securely publishing resources to the Internet with the use of application layer filters. The purpose of this document is to detail how to more effectively use the Hypertext Transfer Protocol (HTTP) and Simple Mail Transfer Protocol (SMTP) filtering capabilities to secure a Microsoft Exchange Server 2003 environment connected to the Internet by using ISA Server 2004. ISA Server 2004 has a built-in wizard, called the New Mail Server Publishing Rule Wizard, which is designed to assist in the creation of rules required to publish mail and Exchange servers. This guide details how to use the New Mail Server Publishing Rule Wizard and additionally configure the HTTP and SMTP filters appropriately for an Exchange Server 2003 environment.

This section provides a scenario for creating rules in ISA Server. A basic network design is described, which applies to the solutions in this document. The relationship between ISA Server rules and Exchange features is explained, including which features will be published in each rule. The concept of delegated authentication is explained, followed by a discussion of the Domain Name System (DNS) and certificates.

There are many deployment topologies used to deploy Exchange services to the Internet. These topologies can vary in complexity, cost, and level of security. This document assumes a simple design topology (which is often the most effective). These design principles can be used in any environment where ISA Server 2004 is publishing Exchange. This document assumes that ISA Server is placed as a second-layer firewall between the perimeter network (also known as DMZ, demilitarized zone, or screened subnet) and the Internal network, and is a member of the internal Active Directory directory service domain. ISA Server is very effective in this position because it adds layer 7 firewall security at the edge of the internal local area network (LAN) with good connectivity for authentication purposes. It is also assumed that a traditional layer 4 packet filtering firewall is placed between the Internet and the perimeter network. However, this is not an explicit requirement. ISA Server 2004 can also operate effectively as the only firewall layer where two layers are not required. The following network addresses are used in the examples for this document:

This guide assumes that the Exchange front-end server is placed on the Internal network along with the Exchange back-end servers and not in the perimeter network. This design is a recommended scenario in the Planning an Exchange Server 2003 Messaging System guide. For additional security, it is also appropriate to place the Exchange front-end and back-end servers on a separate network with an ISA Server computer separating them from the internal LAN. This way, Exchange servers also benefit from the ISA Server remote procedure call (RPC) filter to protect the mailbox servers. For smaller Exchange deployments where only a single back-end server exists, it is not required to install an Exchange front-end server. In these scenarios, publish the Exchange back-end server through ISA Server. We still recommend using an SMTP relay in front of the back-end server. This could even be installed on the ISA Server computer along with the ISA Server Message Screener.
For information about how to deploy Microsoft Office Outlook Web Access for Exchange Server 2003, see Planning an Exchange Server 2003 Messaging System at the Microsoft TechNet Web site.

Various rules are required to provide access to all the features of Exchange. The New Mail Server Publishing Rule Wizard can bundle many of the features into a single rule. However, this removes the ability to provide granular security to each Exchange feature, because the HTTP filter settings are applied on a per-rule basis. To achieve granular security, each Exchange feature should be published in a separate rule. Each of the Exchange features described in the following list will be published using a separate ISA Server rule:

Outlook Web Access: Feature-rich Web browser-based access
Outlook Mobile Access: Simple access using a mobile device, for example, Wireless Application Protocol (WAP)
Exchange ActiveSync: Synchronize directly with Microsoft Windows Mobile-based devices
RPC over HTTP: Microsoft Office Outlook 2003 remote connectivity without a virtual private network (VPN)
SMTP: Send and receive Internet e-mail

A key feature of ISA Server is the ability to pre-authenticate incoming connections by using delegation. In this configuration, ISA Server can test the validity of the provided credentials before they are sent to the published resource. This is done in a seamless way such that users only have to log on once, but they are actually logging on to the ISA Server computer and the published server. We recommend that pre-authentication be used when deploying ISA Server to protect Exchange. This prevents anonymous traffic from reaching the Exchange server, with the exception of SMTP. ISA Server is able to natively validate authentication requests against Active Directory or other directory services by using Remote Authentication Dial-In User Service (RADIUS). The ISA Server computer in this guide is a member of the internal Active Directory domain, and thus it has the required access to the domain controllers to perform pre-authentication. RADIUS implementations of ISA Server are less flexible than domain-joined scenarios. A key sacrifice is the ability to assign permissions by using group membership. Joining ISA Server to the domain is a recommended approach.

Two types of authentication are used in this guide: Basic authentication and forms-based authentication. The following list gives the Exchange features to be published, with the protocol and recommended pre-authentication types used in this document where they survive in the source:

Outlook Web Access: HTTPS (HTTP over SSL); Forms-based or Basic authentication
Outlook Mobile Access: Basic authentication
SMTP: None (Anonymous)

When forms-based authentication is used, a minimum of two external Internet Protocol (IP) addresses are required on the ISA Server computer, one for forms-based authentication and one for Basic authentication. This is because ISA Server 2004 does not support both forms-based and Basic authentication types on a single Web listener, so two IP addresses and two listeners are required. When ISA Server uses the forms-based filter, it receives the user credentials in plain text into the logon page form. ISA Server then validates the credentials against an authentication source (for example, Active Directory) and then replays the credentials to the Exchange server using Basic authentication. Basic authentication transmits the user name and password using Base64 encoding, which is easily reversed and should be considered to be plain text.
For this reason, Secure Sockets Layer (SSL) should always be used between the client and the ISA Server computer, and either SSL or Internet Protocol security (IPsec) encryption used between the ISA Server computer and the Exchange server, to secure the user's credentials in transit.

To enable SSL connections for the Exchange Web-based services, a Web server certificate is required. Each SSL Web server certificate is tied to a specific Domain Name System (DNS) name (unless an SSL wildcard character certificate is used). Because two IP addresses are required to publish the Exchange services when forms-based authentication is used, and each IP address has to be resolved by a different DNS name, two SSL Web server certificates are also required on the ISA Server computer. An additional SSL certificate will be required on the Exchange front-end server to provide end-to-end encryption. If you are familiar with SSL certificates and DNS name resolution, it is possible to use the same SSL certificate on the ISA Server computer as on the Exchange front-end server. However, separate certificates are used in this guide for clarity. Some certificate providers have extra licensing requirements if the certificate is to be used on multiple machines. Check with your provider before purchasing or deploying their certificates to ensure all requirements are met.

The ISA Server computer must trust the SSL certificate on the Exchange server. If the certificate on the Exchange server has been internally generated, the root certification authority (CA) certificate must be imported into the Trusted Root Certification Authorities store on the ISA Server computer. This can be done manually or by using Active Directory, if ISA Server is a member of Active Directory. We recommend purchasing the SSL certificates for the ISA Server computer from a trusted third-party provider so that external clients automatically trust them. This guide uses the names and IP addresses shown in the following table in the instructions:

ISA Server: mail.contoso.com = 10.0.0.1; owa.contoso.com = 10.0.0.2
Exchange front-end server: frontend.contoso.com = 192.168.0.1

For more information about digital certificates for Outlook Web Access on ISA Server, see Digital Certificates for ISA Server 2004 at the Microsoft TechNet Web site.

In this scenario, the external DNS Mail Exchanger (MX) record for the contoso.com domain resolves to mail.contoso.com. In this way, the SMTP service will be published on the 10.0.0.1 address. This deployment scenario uses a split DNS infrastructure. The host records mail and owa, as well as the MX record, must be resolvable by external clients. The host name frontend must be resolvable internally by ISA Server. ISA Server can be configured to resolve this name by adding it to the internal Active Directory DNS zone or by entering it in the local Hosts file. Separate DNS namespaces can also be used if required.

This guide details the step-by-step process for creating the rules required to securely publish Exchange Server 2003. The ISA Server computer must already be installed and configured appropriately on the network as previously stated in the network design. The instructions are appropriate for both ISA Server 2004 Standard Edition and Enterprise Edition. Procedures for configuring ISA Server 2004 by creating listeners, dealing with attachments, and creating rules are provided. Procedures for configuring application layer filters for HTTP filtering and SMTP filtering are also provided.
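For example, if the internal DNS zone cannot be updated, the name-to-address mapping from the preceding table could be placed in the Hosts file on the ISA Server computer. This is a sketch only; the path shown is the Windows default location:

# %SystemRoot%\System32\drivers\etc\hosts on the ISA Server computer
192.168.0.1    frontend.contoso.com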
Two Web listeners will be created, one for the Exchange Web-based services with Basic authentication and one for Outlook Web Access with forms-based authentication.

To configure a Web listener with Basic authentication:

To configure a Web listener with forms-based authentication:

The details for creating the forms-based authentication Web listener include instructions that recommend blocking all attachments from being downloaded using Outlook Web Access when on a public computer. This helps prevent users from inadvertently leaving saved attachments on a public computer. When a user first logs on to Outlook Web Access using the form, they are given the option to specify which type of computer they are using, either a Public or shared computer or a Private computer. This choice is left to the user, although the default selection is that of a Public or shared computer. When a Private computer is selected, a warning appears.

Because the choice is left to the user, some organizations may want to block access to attachments using Outlook Web Access regardless of the type of computer the user is using. If you decide not to block attachments, some attachments such as Microsoft Windows Media files and Microsoft Excel spreadsheets cannot be opened directly by a client connected remotely to an Outlook Web Access server. An attempt to open such a file will result in a failure of the application associated with the file. Those files must be saved locally before they can be opened. You can avoid this problem by configuring Exchange Server 2003 to force users to save attachments by configuring the following registry key on the Exchange Server computer: HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeWEB\OWA\Level2FileTypes.

Four Web publishing rules and one server publishing rule will be created to allow access through ISA Server to the Exchange services.

To create a rule for Outlook Web Access:

To create a rule for Outlook Mobile Access:

To create a rule for Exchange ActiveSync:

The New Mail Server Publishing Rule Wizard does not have an option to configure RPC over HTTP. You will use the Exchange ActiveSync instructions described in the previous procedure, and then edit the minor differences.

To create a rule for RPC over HTTP:

To create a rule for SMTP:

The table in the following section details the HTTP filter settings required for each HTTP rule previously created. These settings provide the granular security settings specific to each Exchange feature. Apply all the settings from the table to the rules; each column of the original table corresponds to one of the four Web publishing rules. To apply settings:

General tab
Maximum headers length: 32768
Maximum payload length: 10485760 or 65536, depending on the rule
Allowed methods: Any, except for the RPC over HTTP rule, which allows only RPC_IN_DATA and RPC_OUT_DATA

Extensions tab
Action taken for file extensions: Block specified extensions (allow all others) for the Outlook Web Access rule; Allow only specified extensions for the other rules
Blocked extension list (Outlook Web Access): .asax, .ascs, .bat, .cmd, .config, .cs, .csproj, .dat, .dll (Note 2), .exe (Note 1), .htr, .htw, .ida, .idc, .idq, .ini, .licx, .log, .pdb, .pol, .printer, .resources, .resx, .shtm, .shtml, .stm, .vb, .vbproj, .vsdisco, .webinfo, .xsd, .xsx
Allowed extension lists (other rules): . (dot) and .aspx, or . (dot) and .dll for the RPC over HTTP rule
Block requests containing ambiguous extensions

Headers tab
Blocked headers: None

Signatures tab
Blocked signatures (Request URL): per-rule combinations of ./\.., %, &, and : (Note 3)

This section discusses the SMTP Request for Comments (RFC) and how ISA Server 2004 can be used to restrict access to an Exchange server by using the SMTP protocol.
Four Web publishing rules and one server publishing rule will be created to allow access through ISA Server to the Exchange services.

To create a rule for Outlook Web Access:

To create a rule for Outlook Mobile Access:

To create a rule for Exchange ActiveSync:

The New Mail Server Publishing Rule Wizard does not have an option to configure RPC over HTTP. Use the Exchange ActiveSync instructions described in the previous procedure, and then edit the minor differences.

To create a rule for RPC over HTTP:

To create a rule for SMTP:

The table in the following section details the HTTP filter settings required for each HTTP rule previously created. These settings provide the granular security settings specific to each Exchange feature. Apply all the settings from the table to the rules; each column corresponds to a respective rule.

To apply settings:

    General tab
      Maximum headers length: 32768
      Maximum payload length: 10485760 / 65536 / Any (per rule)
      Allowed methods: RPC_IN_DATA, RPC_OUT_DATA (RPC over HTTP rule)

    Extensions tab
      Action taken for file extensions: Block specified extensions (allow all others) / Allow only specified extensions (per rule)
      Extension lists: .asax, .ascs, .bat, .cmd, .config, .cs, .csproj, .dat, .dll (Note 2), .exe (Note 1), .htr, .htw, .ida, .idc, .idq, .ini, .licx, .log, .pdb, .pol, .printer, .resources, .resx, .shtm, .shtml, .stm, .vb, .vbproj, .vsdisco, .webinfo, .xsd, .xsx; . (dot), .aspx; . (dot), .dll
      Block requests containing ambiguous extensions

    Headers tab
      Blocked headers: None

    Signatures tab
      Blocked signatures (Request URL): ./\.. (Note 3), % (Note 3), & (Note 3); ./\.., %, &, :; ./\.., %, :; ./\.., %, &

This section discusses the SMTP Requests for Comments (RFCs) and how ISA Server 2004 can be used to restrict access to an Exchange server through the SMTP protocol. The suggested settings in this section are relevant for scenarios where ISA Server is publishing an Exchange server or a third-party SMTP gateway to the Internet. The restrictions describe the usage of the SMTP command verbs and the enforcement of field string lengths.

RFC 821 is the original specification for SMTP mail, with which all SMTP servers should be compliant. Section "4.5.1 Minimum Implementation" of RFC 821 (page 41) shows that seven minimum SMTP commands are required for communication: HELO, MAIL, RCPT, DATA, RSET, NOOP, and QUIT.

RFC 821 (among others) has been superseded by RFC 2821, which is designed to be a more appropriate specification for Internet usage of SMTP. According to section "3.5.1 Minimum Implementation" of RFC 2821 (page 53), there are nine minimum SMTP commands required for communication: EHLO, HELO, MAIL, RCPT, DATA, RSET, NOOP, QUIT, and VRFY.

The new RFC introduced the EHLO command to replace the HELO command. This new command was designed to allow for extensions beyond the original SMTP specification; it also lists the SMTP extensions that the host supports. A common SMTP extension that is not mentioned in RFC 2821 is the AUTH command, which allows for authenticated SMTP sessions and is described in RFC 2554. The only dependency of the AUTH command is the use of the EHLO command, although many mail systems will also accept AUTH requests in conjunction with the HELO command.

RFC 1830 introduces the concept of transmitting large and binary messages more efficiently by specifying the length and size of the message, instead of having the receiving server scan the input for the carriage return/line feed (CR/LF) sequence. This RFC makes use of the EHLO command, and a server supporting RFC 1830 may advertise support for CHUNKING. The BDAT verb is used in place of the DATA verb when CHUNKING is used. For more information about CHUNKING and BDAT, see RFC 1830.

There are many RFCs supporting SMTP that introduce numerous commands and verbs for different purposes. This is further complicated by various interdependencies and version compatibility levels. Although a minimum set of commands is described in the preceding RFCs, the following commands are not typically used in Internet SMTP communication. They may be required in situations where the SMTP server you are publishing is designed to relay large numbers of e-mail messages. These two commands do not require any variable input and should not pose a threat to the system when enabled.

In contrast, two commands added to the minimum implementation in RFC 2821 (EHLO and VRFY) can be used to reveal extra information about an SMTP server or the messaging infrastructure behind it. Modern mail systems have removed most of the VRFY functionality, because it had become a useful tool for spammers to retrieve valid SMTP addresses from an organization; the command VRFY * would typically list all the valid e-mail addresses in the organization. The EHLO command can also be used for information gathering, to determine details about the mail system. However, the value of this information is negligible and could easily be determined by other means.
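To make the minimum command set concrete, the following C# sketch drives a bare SMTP session against the published server, using only verbs from the minimum implementations discussed above. Host and mailbox names are placeholders taken from this guide's sample domain; a production client must handle multi-line and error replies.

    using System;
    using System.IO;
    using System.Net.Sockets;

    class MinimalSmtpSession
    {
        static void Main()
        {
            // Placeholder host: the published SMTP address from the table earlier.
            using (TcpClient client = new TcpClient("mail.contoso.com", 25))
            using (StreamReader reader = new StreamReader(client.GetStream()))
            using (StreamWriter writer = new StreamWriter(client.GetStream()) { AutoFlush = true, NewLine = "\r\n" })
            {
                Console.WriteLine(reader.ReadLine()); // 220 greeting

                // HELO is used instead of EHLO so every reply fits on one line;
                // a real client should use EHLO and parse multi-line replies.
                Send(writer, reader, "HELO client.example.com");
                Send(writer, reader, "MAIL FROM:<sender@example.com>");
                Send(writer, reader, "RCPT TO:<postmaster@contoso.com>");
                Send(writer, reader, "DATA");         // server replies 354
                writer.WriteLine("Subject: test");
                writer.WriteLine();
                writer.WriteLine("Hello from the minimum command set.");
                Send(writer, reader, ".");            // end of message data
                Send(writer, reader, "QUIT");
            }
        }

        static void Send(StreamWriter writer, StreamReader reader, string command)
        {
            writer.WriteLine(command);
            Console.WriteLine(reader.ReadLine());     // naive: assumes single-line replies
        }
    }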
We recommend enforcing string lengths with ISA Server to help prevent abuse of the SMTP server and to protect against buffer overflow attacks. ISA Server has default string length restrictions that should be used. RFC 2821 section 4.5.3.1 indicates some size limitations; however, these limitations are generically large and are not specific to each command. A general restriction of 512 characters is stated in the RFC, but ISA Server 2004 enforces stricter command lengths for added security. Commands that do not require variable input, for example RSET and DATA, are limited to six characters (four for the command and two for the <CRLF> characters). These restrictions should not impact normal functionality.

Regarding authentication commands, RFC 2554 explicitly states that "The BASE64 string may in general be arbitrarily long" and that "Clients and servers MUST be able to support challenges and responses that are as long as are generated by the authentication mechanisms they support." The authentication limitations imposed by ISA Server 2004 by default are in line with the length requirements of Internet Information Services (IIS), SMTP, and Exchange.

The following table lists the common SMTP commands and the RFC in which each is listed as a minimum requirement. You may want to enforce a specific RFC or combine various RFCs.

    Command   RFC 821 minimum   RFC 2821 minimum
    EHLO                        X
    HELO      X                 X
    MAIL      X                 X
    RCPT      X                 X
    DATA      X                 X
    RSET      X                 X
    NOOP      X                 X
    QUIT      X                 X
    VRFY                        X
    AUTH      (defined in RFC 2554)
    BDAT      (defined in RFC 1830)

Config #1 lists the minimum requirements for SMTP communication and is a subset of the original RFC 821. Config #2 lists a recommended set of commands that provides better compatibility with most SMTP systems and is fully compliant with RFC 821. Config #2 should pose no additional threat to the SMTP server, but it will disclose some information through the EHLO command.

To configure the ISA Server computer:

Even though ISA Server will block unwanted SMTP verbs, we also recommend stopping the SMTP server from advertising Extended Simple Mail Transfer Protocol (ESMTP) services that are blocked by ISA Server. This prevents ISA Server from inadvertently disconnecting valid SMTP sessions. For example, a sending SMTP server may use EHLO to obtain a list of supported ESMTP services. It may then try to use a verb that the SMTP server is advertising (such as BDAT, the CHUNKING alternative to the DATA verb) but that ISA Server is blocking. If the sending server tries to use an advertised verb that ISA Server blocks, ISA Server will terminate the session and the mail will not be received. The sending server will repeatedly attempt to deliver the message, unsuccessfully, until it is regarded as undeliverable and returned to the sender.

There are two methods to resolve this problem: either enable the advertised commands in the SMTP filter and allow them through ISA Server, or remove support for the verbs from the SMTP server. For details about disabling SMTP verbs on IIS and Exchange, see the Microsoft Knowledge Base article 257569, "How to turn off ESMTP verbs in Exchange 2000 Server and in Exchange Server 2003".
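The verb and length restrictions described above amount to an allow-list with per-command limits. The following C# sketch illustrates the idea; it is not ISA Server code, and the non-default limits shown are assumptions.

    using System;
    using System.Collections.Generic;

    static class SmtpVerbFilter
    {
        // Allowed verbs with maximum command-line lengths (including <CRLF>).
        // The 6-character limit for DATA and RSET mirrors the ISA Server defaults
        // described above; the other limits are assumptions for this sketch.
        static readonly Dictionary<string, int> Allowed =
            new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase)
        {
            { "EHLO", 512 }, { "HELO", 512 }, { "MAIL", 512 }, { "RCPT", 512 },
            { "DATA", 6 },   { "RSET", 6 },   { "NOOP", 512 }, { "QUIT", 6 },
            { "AUTH", 1024 },
        };

        // Returns true if the command line uses a permitted verb and fits its limit.
        public static bool IsAllowed(string commandLine)
        {
            string verb = commandLine.Split(' ')[0];
            int maxLength;
            if (!Allowed.TryGetValue(verb, out maxLength))
                return false;                            // VRFY, EXPN, BDAT, etc. are rejected
            return commandLine.Length + 2 <= maxLength;  // +2 accounts for the trailing <CRLF>
        }
    }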
For information about Exchange Server protocols, see "Chapter 9 - Protocol Virtual Servers in Exchange Server 2003" in the Technical Reference Guide for Exchange Server 2003 at the Microsoft TechNet Web site.

For information about how to deploy Outlook Web Access in Exchange Server 2003, see the Exchange Server 2003 Deployment Guide and Planning an Exchange Server 2003 Messaging System, both at the Microsoft Download Center.

For information about how to deploy Outlook Web Access in Exchange 2000 Server, see Outlook Web Access in Exchange 2000 Server and Customizing Microsoft Outlook Web Access, both at the Microsoft Download Center.

Additional ISA Server 2004 documents are available at the ISA Server 2004 Guidance page. For the latest information, see the ISA Server Web site.
http://technet.microsoft.com/en-us/library/cc713326.aspx
What's new in FlexCel Studio for .NET

New on v 6.23 - November 2018

Updated minimum required Android version to 8.0 Oreo. As required by Xamarin and Google Play, the minimum supported Android version is now 8.0 (API Level 26). We removed calls to deprecated methods and now require methods only available in API Level 26 or newer.

New methods UnshareWorkbook and IsSharedWorkbook in ExcelFile. The method UnshareWorkbook allows you to remove all tracked changes from an xls file. (FlexCel doesn't preserve tracked changes in xlsx files.) IsSharedWorkbook lets you know whether an xls file is a shared workbook. (See the sketch at the end of this version's notes.)

New method PivotTableCountInSheet in ExcelFile. The method PivotTableCountInSheet returns the number of pivot tables in the active sheet.

Support for calculating the function RANK.AVG. Added support for calculating the Excel function RANK.AVG, which was introduced in Excel 2010. See the supported Excel functions.

Now you can see the call stack in circular formula references when you call RecalcAndVerify. RecalcAndVerify will now report the call stack that led to a cell recursively calling itself, making it simpler to track those down in complex spreadsheets. Take a look at the modified Validate Recalc demo, which includes a file with circular references, to see how it works.

Bug Fix. Some xlsx files with legacy headers could fail to load.

Bug Fix. The function IFNA could, in very rare corner cases, return #N/A if its first parameter was #N/A, instead of returning the second parameter.

Bug Fix. There could be an error when copying sheets between workbooks when the copied sheet had a shape with a gradient.
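A minimal sketch of the new 6.23 methods, assuming the usual FlexCel namespaces; the exact signatures are inferred from the notes above and may differ.

    using FlexCel.XlsAdapter;

    class V623Demo
    {
        static void Main()
        {
            XlsFile xls = new XlsFile();
            xls.Open("tracked.xls");

            if (xls.IsSharedWorkbook())   // is this xls a shared workbook?
                xls.UnshareWorkbook();    // remove all tracked changes

            int pivots = xls.PivotTableCountInSheet(); // pivot tables in the active sheet

            xls.Save("untracked.xls");
        }
    }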
New on v 6.22 - October 2018

Support for Excel 2019. Because we support Excel 365, and the changes in Excel 2019 are a collection of the changes made to Office 365 from 2016 up to now, FlexCel already supported Excel 2019. For example, support for recalculating the new functions introduced in Excel 2019 was added in FlexCel 6.7.16, back in March 2016. This new FlexCel version adds a TRecalcVersion.Excel2019 enumeration value, which avoids the prompt to save changes when closing the file. It also adds a "v2019" value to TFileFormats, which allows you to specify that the file should identify itself as Office 2019, and comes with empty 2019 files to be created with NewFile. Empty 2019 files are virtually identical to empty 2016 files, but the colors "Accent1" and "Accent5" in Excel 2016 are swapped to correspond to "Accent5" and "Accent1" respectively in Excel 2019.

Reports can now use tables as datasources. You can now use Excel tables as sources for reports. Take a look at the new Tables as datasources demo and the section about Excel tables in the Report designers guide.

New method to rename tables. The new method RenameTable renames a table, changing all references in formulas to the new name.

New debug mode for Intelligent Page Breaks. You can now use the property DebugIntelligentPageBreaks in a report, or the methods DumpKeepRowsTogetherLevels and DumpKeepColsTogetherLevels in the API, to debug how intelligent page breaks are working. See intelligent page breaks in the API Guide for more information on how to use the feature.

Better drawing of conditional formats at very low or high zoom levels. Icons and databars in conditional formats now dynamically adjust their margins to look better at high or low zoom levels.

Bug Fix. Cell indent was not being considered when autofitting rows or columns.

Bug Fix. FlexCel wouldn't let you rename a sheet to the same name with different casing.

Bug Fix. CountIF, CountIFs and similar xIf/xIfs functions could return ERRNA if one of the conditions was an unknown user function, instead of returning 0 as Excel does.

Bug Fix. The function Rank.EQ was ignoring cells with errors, while Excel returns the first error cell if any cell in the range has an error.

Bug Fix. Inside a <#preprocess> section of a report, a <#delete row> or <#delete column> tag could end up deleting the wrong column.

Bug Fix. There could be an error when calculating What-If tables that had their variables in a different sheet.

Bug Fix. When deleting rows in reports with multiple levels of intelligent page breaks, the engine could calculate more page breaks than necessary.

Bug Fix. FlexCel now validates that a table isn't named the same as a defined name, or vice versa, to avoid creating invalid Excel files.

Bug Fix. When rendering a file to pdf or images, FlexCel could pick the wrong normal font in very rare cases.

Bug Fix. APIMate could report code that wouldn't compile for embedded xml content.

New on v 6.21.6 - September 2018

Updated SkiaSharp to 1.60.3. FlexCel now uses SkiaSharp 1.60.3.

Improved Linux support. Many small bug fixes and updates to the Delphi Linux support.

Bug Fix. FlexCel would fail to load "Strict Open XML" files with formulas that returned dates.

Bug Fix. FlexCel could crash when rendering xls files with rare images.

New on v 6.21.5 - August 2018

Bug Fix. FlexCel could fail to parse complex structured references in tables.

Bug Fix. Formulas that referred to different files could refer to the wrong sheet in those linked files in some rare cases.

Bug Fix. The IFERROR function could give a #VALUE! error in some cases when chained with other functions.

New on v 6.21 - July 2018

Bug Fix. If a "rows to repeat at top" or "columns to repeat at left" range was outside the print area, FlexCel would ignore it, while Excel uses it anyway. FlexCel now behaves like Excel and uses the repeating range even if it is outside the print area.

Bug Fix. In R1C1 mode, full ranges spanning more than one row or column, for example Sheet1!3:5, could be returned as Sheet1!5 only.

Bug Fix. Sometimes cells formatted as "center on selection" were not rendered when exporting them to pdf or html.

Bug Fix. When hiding a column without a given width, and the default column width was different from the Excel default, the column wouldn't be hidden when saving as xls.

Bug Fix. There could be an error in ClearSheet with some special images.

ApiMate now reports hidden sheets. ApiMate will now tell you how to hide sheets.

Improved chm help. The chm help shipped with FlexCel could show javascript errors in some Windows versions.

New on v 6.20 - June 2018

Full support for reading and writing Data Connections in xlsx files. You can now use the new methods GetDataConnections and SetDataConnections to read and write the data connections in xlsx files. As usual, APIMate will show you the commands needed to enter new DataConnections. Note that the new methods only work in xlsx files, not xls, and there is no support for refreshing data queries from FlexCel, only for reading or writing connections.

Improved performance with thousands of merged cells. We rewrote the merged-cell handling engine to make it faster and work better when there are thousands of merged cells.
Breaking Change: Improved chart rendering. FlexCel now recalculates the size of chart legends when they are docked to the top, bottom, right, left or top-right, so if the size of the series changes, the legend box and the rest of the chart adapt. There are also other small tweaks in the chart rendering engine to make xls charts more faithful to what Excel shows. Note: the Excel chart engine has changed a lot since the Excel 2003 days, and Excel 2003 doesn't display charts exactly as Excel 2016 does. We can't make it work both ways, so this update makes chart rendering more like Excel 2016. If you were rendering old files and relied on the exact position of the legend, this update might move the legend a little, positioning it as Excel 2016 would rather than as Excel 2003 would. This is why it is a breaking change.

New overloads for the methods SetCellFromString and GetStringFromCell now accept cell references. The methods SetCellFromString and GetStringFromCell can now use references like "A1" or "Sheet1!B3". This is a shortcut that avoids using a TCellAddress object to get the row and column from the reference.

New overload for the method TPartialExportState.SaveCss which allows saving the css without the tags. There is now an overload of TPartialExportState.SaveCss with a parameter that lets you get the inner html of the class definitions, without the enclosing tags.

New on v 6.19.5 - May 2018

The functions CUMIPMT and CUMPRINC are now supported when recalculating. FlexCel can now recalculate the functions CUMIPMT and CUMPRINC.

New methods GetTokens and SetTokens in ExcelFile allow you to parse arbitrary text. The new methods GetTokens and SetTokens allow you to parse any text into tokens and then convert those tokens back into a string. They complement the existing GetFormulaTokens and SetFormulaTokens.

The XlsChart object now returns the 3D properties for xls charts. You can now read the 3D properties of charts inside xls files.

Improved Excel 95 compatibility. FlexCel can now read some Excel 95 files that would previously throw errors.

FlexCel now preserves "new style" sheet and workbook protections in xlsx files. Both FlexCel and Excel use an old algorithm to compute sheet and workbook protections, and both keep doing it this way because it is the only way to port the protections between xlsx and xls files. But some third-party generated files can have a newer style of protection which is incompatible with xls and which FlexCel didn't understand. FlexCel now preserves those new-style protections in xlsx files too. The new-style protections are lost if you save as xls, but that happens in Excel too.

When wrapping text, FlexCel now recognizes different kinds of Unicode spaces. Spaces other than character 32 are now used as separators when rendering the file and wrapping text. Note that non-breaking spaces (char 160) are still not used as separators, as they aren't supposed to break a line.

Bug Fix. SetCellFormat with ApplyFormat could format the cells wrong if the cells were empty and there was a column or row format applied.

Bug Fix. Sometimes, when copying sheets from different files, some named ranges would not be copied.

Bug Fix. Khmer text could be rendered wrong in some rare cases.

Bug Fix. When exporting to pdf, you could get an error if a character didn't exist and FallbackFonts was empty.
New on v 6.19.0 - March 2018

Support for the Khmer language when exporting to pdf. The PDF engine in FlexCel now includes a Khmer shaper which can correctly create Khmer documents, as long as the Khmer fonts you are using are OpenType (that is, they contain GSUB and GPOS tables).

Reduced memory usage when exporting. Exporting to PDF and SVG was tweaked to consume less memory in high-performance environments where many threads export at the same time. The performance of the pdf engine was also improved.

Updated the SkiaSharp library used in .NET Core to the latest version. SkiaSharp used is now 1.60, and the code was adapted to remove LockPixels/UnlockPixels, which no longer exist. Note that due to these changes, FlexCel no longer works correctly with SkiaSharp 1.59.

Images made transparent with Excel are now converted between xls and xlsx files. FlexCel now converts the transparent-color parameter between xls and xlsx files.

Bug Fix. In some cases, after copying rows, then deleting sheets, and then inserting or deleting rows, the formula bounds cache could become invalid and formulas would fail to update in the last deletion of rows.

Bug Fix. The ROUND function now behaves more like Excel, and not like C#, in some border cases.

Bug Fix. Formulas with intersections of a name with a 3d range would be interpreted as #NAME? instead of the correct formula.

Bug Fix. In some invalid cases the INDIRECT function would throw exceptions that were later handled. While the result was still correct, those exceptions could slow down recalculation a lot in a file with thousands of formulas.

New on v 6.18.5.0 - January 2018

New convenience methods SetCellValue(cellRef, value) and GetCellValue(cellRef). The new methods SetCellValue(cellRef, value) and GetCellValue(cellRef) allow you to set or get a cell value directly from a text reference like "A1", without having to use a TCellAddress class. (See the sketch at the end of this section.)

Support for shape connectors in xlsx. FlexCel now preserves connections between shapes in xlsx, and converts them from xls to xlsx and vice versa. We've also added the properties IsConnector, StartShapeConnection and EndShapeConnection to allow you to enter connections with the API. As usual, APIMate will tell you the code needed to add a connector from an Excel file.

Bug Fix. The VLOOKUP and HLOOKUP functions now support wildcards (* and ?) in search strings.

Improved compatibility with invalid files generated by third-party tools. Some xlsx generators can write invalid column widths. When exporting to html/pdf, FlexCel now treats those widths as the default (like Excel does) and not as 0 (as it used to).

Bug Fix. FlexCel could fail to parse some structured references in tables.

Bug Fix. When calculating UDFs with errors in the arguments, FlexCel could in some cases return #ERRNAME! instead of evaluating the UDF.

Bug Fix. In some files, the calculated height for items inside Forms listboxes was too big.

Bug Fix. FlexCel failed to read custom document properties saved in UTF16.

Bug Fix. Reports with DeleteEmptyBands = TDeleteEmptyBands.ClearDataOnly would not clear text inside textboxes or hyperlinks.

Bug Fix. In some bidirectional reports with report.DeleteEmptyBands = TDeleteEmptyBands.MoveRangeUp, the tag text was not erased.
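A minimal sketch of the 6.18.5 convenience overloads, assuming the usual FlexCel namespaces and the NewFile/Save pattern used elsewhere in these notes; exact signatures are assumptions.

    using FlexCel.Core;
    using FlexCel.XlsAdapter;

    class CellRefConvenience
    {
        static void Main()
        {
            XlsFile xls = new XlsFile();
            xls.NewFile(1, TExcelFileFormat.v2016); // one empty sheet, 2016 format

            xls.SetCellValue("A1", 42);             // no TCellAddress needed
            xls.SetCellValue("Sheet1!B3", "hello"); // sheet-qualified reference

            object v = xls.GetCellValue("A1");      // reads 42 back
            xls.Save("convenience.xlsx");
        }
    }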
New on v 6.18.0.0 - December 2017

Support for default CryptoAPI xls encrypted files. FlexCel can now read and write xls files encrypted with CryptoAPI encryption. This is the default encryption algorithm for files created by Excel 2003 or newer. With this addition, all modes and encryption algorithms in both xls and xlsx are now supported.

Full support for manipulating XML Mappings in xlsx files. XML Mappings are now preserved when opening and saving xlsx/m files, and there are two new commands in the API to set or read them with code: GetXmlMap and SetXmlMap. As usual, APIMate will show how to use SetXmlMap. Note: the new API only works in xlsx/m files, not xls. Xml mappings inside xls files are still preserved when opening and saving xls files, but they are not converted between xls and xlsx.

Bug Fix. Images made transparent with Excel tools might not preserve their transparency when saved as xlsx.

Bug Fix. In .NET Core 2.0, exceptions thrown by FlexCel would display the message 'Secure binary serialization is not supported on this platform' instead of the actual error message.

Bug Fix. When rendering shapes with semi-transparent gradients to PDF or SVG, the gradients were exported as fully opaque.

Bug Fix. Files with table slicers saved by FlexCel might not open in Excel 2013. (They already worked fine in Excel 2016, and Excel 2010 doesn't support table slicers.)

Bug Fix. Rotated shapes inside groups in xlsx files could be rendered wrong.

Bug Fix. Groups that were flipped horizontally or vertically weren't flipped when rendering. Objects inside were flipped, but the groups themselves weren't.

Bug Fix. Filled polygons could be exported wrong to PDF in some border cases.

Bug Fix. Filled polygons could be exported wrong to images with the SKIA backend used in .NET Core and Android.

Bug Fix. Legacy system colors in drawings inside xls files could be rendered as transparent instead of the correct color in border cases.

Bug Fix. Xlsx files with complex gradients whose stops were not sorted could produce invalid PDF files.

Bug Fix. Textboxes with more than 8224 characters would corrupt the file when saved as xls.

Updated SkiaSharp to 1.59.2 for .NET Core. FlexCel now requires SkiaSharp 1.59.2 when used in .NET Core.

New on v 6.17.4.0 - November 2017

Breaking Change: The Subtotal command allows more customization. The Subtotal command now provides more parameters in its callbacks to allow for more customization. In addition, by default it writes better text for non-sum aggregates (for example, "Customers Average" instead of "Customers Total" when you aggregate with Average). There is also a new example of how to use the command. Note: this is a breaking change if you are using the callbacks, since the callbacks now have more parameters. It is easy to fix at compile time: just add those parameters to the callbacks and recompile.

New SubtotalDefaultEnglishString command. SubtotalDefaultEnglishString provides the string used by the different aggregate functions used in Subtotal. You can use this method as a parameter to Subtotal to calculate the grand total and subtotal labels.

Ability to copy OLE objects between different files when using the xlsx file format. The restriction that you can't copy sheets from one file to another if they have embedded OLE objects has been removed for xlsx files. It is still not possible to copy sheets between different files with embedded OLE objects in xls.

Ability to read custom document properties in xls files. Up to now, FlexCel could only read custom document properties in xlsx files. It can now also read them in xls files, and custom properties are migrated from xls files to xlsx too.
Better handling of URL encoding in some filenames. Filenames containing characters like "#" are now correctly encoded when linked from FlexCel. The events that allow you to manually define the links have a new parameter, UrlNeedsEncoding, which you can set to false to avoid all encoding by FlexCel if you provide an already-encoded URL to the event.

Bug Fix. The "last print time" document property wasn't read in xlsx files.

Bug Fix. When copying cells from one file to another, autofilters would be copied even if they were not in the range being copied.

Bug Fix. Formulas referencing sheets whose names could be interpreted as an R1C1 cell reference (like "R3C5") were saved without quotes in the sheet name, and thus became invalid formulas.

Bug Fix. Modified and creation times were read in UTC but saved in local time, which could result in a different date being saved back. Everything is now handled in UTC.

Bug Fix. In some very complex bidirectional reports with sorting in the template, the fields might end up not sorted correctly, and some might appear twice.

New on v 6.17.3.0 - October 2017

New TFlxNumberFormat.PercentCount method. The new method TFlxNumberFormat.PercentCount lets you know how many non-escaped % symbols are in a format string.

Better display of negative zero numbers. A negative number that displays as zero, like "-0.001" formatted with a "0.0" format string, now displays as "0.0" and not "-0.0".

iOS demos updated to require iOS 8 or later so they can be compiled with Xcode 9. The iOS demos targeted iOS 6, which isn't supported in Xcode 9. We now target iOS 8.

New on v 6.17.2.0 - October 2017

Better support for machine-locale formats. Before this version, whenever you used a "machine dependent" date format (those shown with * in Excel), FlexCel would always use 2-digit months, 2-digit days and 4-digit years (as in 01/02/2000). Now it can use single digits for days and months, and 2 digits for years (as in 1/2/00), if your machine locale is set to a format that uses those.

Improved xls chart rendering. Series in hidden columns or rows no longer count as series when drawing the chart, to better match Excel's behavior. Before this version those series would appear empty, but would still take space in the chart.

Improved compatibility with invalid third-party xlsx files. FlexCel can now open files where the case of the file names inside the container is incorrect. This happens with files generated by the "1C" database and might happen with other third-party files.

New on v 6.17.0.0 - September 2017

Full support for Excel tables in xlsx files. This release completes the support for tables in the FlexCel API introduced in 6.11:
- Tables are now exported to PDF/HTML/SVG/Images and printed with all the table formatting, including banded columns and rows. All formatting is supported.
- FlexCel can now recalculate the structured references used in tables. Everything is supported, from simple references like Table1[@column] to references to tables in another file (for external table references you need to create a Workspace). See the sketch after this list.
- Complete API for adding, deleting or modifying tables with code. APIMate was modified to show how to use the new parts of the API.
- API for adding, deleting or modifying custom table styles. APIMate shows how to enter table styles with code.
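A minimal sketch of a structured table reference being written and recalculated with the existing SetCellValue/TFormula API; the file, table and column names are hypothetical, and the namespaces are the usual FlexCel ones.

    using FlexCel.Core;
    using FlexCel.XlsAdapter;

    class StructuredRefDemo
    {
        static void Main()
        {
            XlsFile xls = new XlsFile();
            xls.Open("sales.xlsx"); // assumed to contain a table Table1 with an Amount column

            // Write a formula that uses a structured reference, then recalculate it.
            xls.SetCellValue(1, 5, new TFormula("=SUM(Table1[Amount])")); // cell E1
            xls.Recalc();

            object total = xls.GetCellValue(1, 5);
            System.Console.WriteLine(total);
        }
    }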
Support for reading and writing Strict Open XML files. FlexCel can now read and write Strict Open XML spreadsheets. The default is to save as strict xml only if you opened a strict xlsx file and saved it; in all other cases we fall back to the standard transitional xlsx. There is a new property, StrictOpenXml, which you can set to force saving as strict xlsx, and read to know whether the file you opened was strict xlsx.

Support for .NET Standard 1.5, 2.0 and .NET Core 2.0. The FlexCel nuget package contains a .NET Core 2.0 assembly and .NET Standard assemblies. The .NET Standard assemblies can be referenced in multiplatform projects, and they will be replaced by the corresponding native assembly.

Ability to add autoshapes to charts. The existing method ExcelFile.AddAutoShape now also works in chart sheets, and there is a new method ExcelChart.AddAutoShape that allows adding shapes to charts embedded inside a sheet.

FlexCel now preserves embedded OLE objects in xlsx files. FlexCel now preserves embedded OLE documents (for example, a Word document) in xlsx files.

Improved performance in reports with thousands of hyperlinks. FlexCel is now much faster when dealing with thousands of hyperlinks in reports.

<#row height> and <#column width> tags in reports now accept expressions. You can now write something like <#row height(<#someexpression>)>, where the expression is calculated at the moment of running the report.

FlexCel now converts strings with timestamps to dates more like Excel. In Excel you can write a string with an invalid timestamp like "3:61" (3 hours 61 minutes, which is 4 hours 1 minute) and it will be accepted. FlexCel used to reject those timestamps; now it accepts them just like Excel.

Support for the #GETTING_DATA error in TFlxFormulaErrorValue. The enumeration TFlxFormulaErrorValue now contains a new ErrGettingData member, which corresponds to that error type in Excel. The ERROR.TYPE function will return 8 for this error. Note that Excel doesn't save this error in xlsx files (it saves #N/A instead), but it does save it in xls files. FlexCel preserves it in both.

Better support for comments in xlsx files at high dpi. The size of comments is now preserved better when ScreenScaling is > 0.

ExcelFile.RenderObject can now render shapes inside groups and takes an objectPath parameter to specify the name of the object to render. There are new overloads of ExcelFile.RenderObject and ExcelFile.RenderObjectAsSVG that take an objectPath parameter. This allows you to render an individual shape inside a group instead of the full group, and also to specify the name of the shape to render directly, as in xls.RenderObject(1, "@objectname").

Reduced memory usage when loading fonts for exporting to PDF. We've optimized the pdf font engine so it uses less memory when loading fonts.

Support for returning arrays with the INDIRECT function. When doing a SUM, SUMIF or N of an INDIRECT function that returned an array, FlexCel used to work like Excel 2003 or older and only use the first value of the array. Now it uses the full array in SUMIF and N, like Excel 2007 or newer, and in SUM, like Excel 2010 or newer. Note that this formula behavior is exclusive to Excel 2010 or newer: neither LibreOffice nor Google Docs implements it.

All examples available on GitHub. Besides being available with the setup and at the documentation site, the examples are now also available on GitHub.

New on v 6.16.0.0 - August 2017

User-defined formats in reports. You can now define a function in code that formats cells depending on complex rules.
Take a look at the new User Defined Formats demo to see an example of how to use them.

When signing PDFs, FlexCel now marks the generated files as requiring Acrobat 8. Due to known vulnerabilities in SHA1, signing with SHA1 is deprecated, so the FlexCel signing demos have been modified to use SHA512. As SHA512 requires Acrobat 8 or newer, the files are now marked as requiring Acrobat 8 or newer. Note that older Acrobat versions will still be able to view the file, but they won't validate the signatures.

Added recalculation support for new functions. Added support for the CEILING.MATH and FLOOR.MATH functions added in Excel 2013.

Improved behavior of CEILING/FLOOR functions. The existing CEILING, FLOOR, ISO.CEILING, CEILING.PRECISE and FLOOR.PRECISE functions are now calculated more the way Excel calculates them, with higher precision, to prevent rounding errors like 1.00000...00001 being rounded up to 2.

Breaking Change: Support for double underlines when exporting to pdf, and refactored TUIFont. Double underlines are now exported to pdf. To support this, we had to remove the Underline and Strikeout parameters from the TUIFont definition and put them in a separate TUITextDecoration structure. The DrawString methods in PdfWriter now require a new TextDecoration parameter, and you no longer specify underline or strikeout in TUIFont. Also, as TUIFont now differs from the GDI+ Font by not carrying underline, you can't convert automatically between a Font and a TUIFont anymore. This change can break some code if you are using TUIFont and DrawString directly, but it should break at compile time and be straightforward to fix: just create the TUIFonts without underline, and specify a text decoration when calling DrawString with those fonts. Take a look at the updated Creating pdf files with pdf api demo to see how it works now.

New methods TUIFont.CreateFromMemory and TUIFont.CreateFromFile. The new methods TUIFont.CreateFromMemory and TUIFont.CreateFromFile allow you to create TUIFonts from fonts not installed in the system.

Improved conversion of control points in autoshapes between xls and xlsx files. Some shapes, like a roundrect or a smiley face, are now converted better to xlsx when read from xls files; the default control points for those shapes weren't converted correctly.

Now you can enter macros that refer to other files with the API. When you call AddButton or similar methods, you can now use a macro that refers to a different file, like file2!macro1. As usual, APIMate will report the exact syntax to link to a different file.

Breaking Change: Added a new parameter to ExcelFile.RecalcRange. When you call RecalcRange, you now need to specify whether the formula has relative references (as is the case in conditional formats, data validations and names) or absolute references (as in normal spreadsheet formulas). Before this version, RecalcRange assumed absolute references, so if you are updating existing code and want to keep the exact behavior, just add ", false)" as the last parameter. But make sure to review that the formula is not a relative formula.

Breaking Change: Bug Fix. The parameters MaxWidth and MinWidth of the <#column width> and <#row height> tags weren't working properly when autofitting. They now work according to the docs. If you were using MaxWidth and MinWidth in <#row height(autofit...)> or <#column width(autofit...)>, please review those tags and make sure MinWidth and MaxWidth are in the correct positions.
FlexCel now checks that table names are valid when you create a table with the API. FlexCel won't let you name a table with an invalid name (for example, a name containing spaces).

Bug Fix. When the print zoom was bigger than 100%, the maximum column to print could be calculated wrong.

Bug Fix. When evaluating data validations with CheckDataValidation (introduced in FlexCel 6.15), INDIRECT functions using RC notation were evaluated wrong.

Bug Fix. When doing bidirectional reports with multiple horizontal master-detail X ranges, the rows for the vertical ranges could be wrong.

Bug Fix. When a SPLIT tag was used inside a multiple master-detail relationship in a report, the results could be wrong.

SKIA library updated to the latest version. We have updated the code that uses the SKIA library in .NET Core to the latest version of the library, removing calls to deprecated methods and replacing them with equivalent ones.

Bug Fix. When exporting Arabic rich text with multiple formats in the same cell, and a scale factor different from 1, to pdf, the results could have the wrong font sizes.

Bug Fix. The Row() and Col() functions would return 1 when called from <#Format range> or <#Delete range> tags. They now return the row and column of the cell where the tag is written.

Examples and demos now use SQLite. The database used in the examples was migrated from SQL Server CE to SQLite, because SQL Server CE is deprecated by Microsoft.

Bug Fix. Setup wasn't correctly registering the .NET 4 and newer assemblies in Visual Studio, so they wouldn't appear in the "Add reference" dialog as Extensions. (They were still available if you browsed for them.)

New on v 6.15.0.0 - May 2017

Ability to check and evaluate data validations. The new methods CheckDataValidation, CheckDataValidationsInSheet and CheckDataValidationsInWorkbook allow you to check whether the values of a cell, sheet or workbook conform to the data validations in those cells. You can also check whether a value would be valid when entered into a cell with CheckDataValidation. (See the sketch at the end of this section.)

Improved tagging of PDF files. Repeated columns and rows are now tagged as artifacts and not real content. This also fixes a rare bug that could happen when exporting a file with "Columns to Repeat at left" to tagged pdf.

Improved HTML5 exporting. Updated the HTML5 exporting to comply with the latest HTML5 standard.

Improved bidirectional reports. You can now make a bidirectional report where the vertical range doesn't cross the horizontal range, but is contained exactly within the top and bottom coordinates of the horizontal range.

Improved Unicode bidi algorithm. Updated the bidi algorithm from version 3.0 to 6.3, fixing errors that could happen when using invalid Unicode characters.

Improved compatibility with invalid files. FlexCel no longer throws an exception by default when a file has invalid hyperlinks. You can change this behavior by setting the new ErrorOnXlsxInvalidHyperlink property in ExcelFile.ErrorActions.

Bug Fix. FlexCel failed to open some encrypted xlsx files saved in Excel 2013 or newer where the key size of the algorithm used to encrypt the file was different from the key size of the algorithm used to encrypt the key.

Bug Fix. When exporting to html, a merged cell that had columns or rows not hidden but with zero width or height could render wrong.
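A minimal sketch of the new validation checks; the method names come from the notes above, while parameter lists and return types are assumptions.

    using FlexCel.XlsAdapter;

    class ValidationDemo
    {
        static void Main()
        {
            XlsFile xls = new XlsFile();
            xls.Open("form.xlsx");

            // Assumption: you can ask whether a candidate value would pass the
            // data validation of a given cell before actually writing it.
            bool ok = xls.CheckDataValidation(2, 2, 150); // row 2, column B, candidate value 150
            if (ok) xls.SetCellValue(2, 2, 150);

            // Assumption: the sheet/workbook variants report existing violations.
            var sheetResult = xls.CheckDataValidationsInSheet();
            var bookResult = xls.CheckDataValidationsInWorkbook();
        }
    }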
New on v 6.14.0.0 - April 2017

New documentation center. We've completely redesigned the documentation, including lots of new code examples, a tips and tricks section and much more. We've manually reviewed all the user guides to make sure they are up to date with the latest information. You can find the new documentation center at our website. Note: this version removes support for integrating help into Visual Studio 2012 or older. In those versions you can't use F1 to get help and will have to search manually on the web or in the included chm file. Also, when you press F1 on FlexCel types, you will be redirected to the FlexCel online help. Setup no longer integrates the offline help for any Visual Studio version.

Official support for .NET Core, including graphics. FlexCel for .NET Core is now out of beta and fully functional. It also supports creating pdf files, html, etc., using SkiaSharp for the graphics support.

Support for .NET 4.7. There is a new FlexCel.dll targeting .NET 4.7.

Improved setup. The setup.exe for Windows now includes the .NET Core and Xamarin libraries and examples, so there is nothing extra to add. The nuget package flexcel-dnx for .NET Core has been replaced by FlexCel.nupkg, which supports all platforms that FlexCel supports. The setup will now automatically register the FlexCel NuGet package folder so you can add it from Visual Studio.

Breaking Change: Deprecated Xamarin Components. We no longer distribute FlexCel via Xamarin Components; they have been replaced by the TMS.FlexCel NuGet package. If your apps used the Xamarin Components, remove them and add a reference to the TMS.FlexCel NuGet package.

Support for Web Add-ins, of either Content or Task pane type. FlexCel now preserves Web add-ins in xlsx files. There are two new methods in the API, ExcelFile.HasWebAddinTaskPanes and ExcelFile.RemoveWebAddinTaskPanes, which you can use to know whether there are any task pane add-ins in the file and to remove them. Content add-ins are just objects, and you can remove them with DeleteObject. You can find out whether an object is a Web Add-in by reading the new property TObjectProperties.IsWebAddin.

Support for Table Slicers. FlexCel now preserves Table Slicers in xlsx files (xls files don't support them). Note that Pivot Table Slicers were already preserved; this refers to the Table Slicers introduced in Excel 2013.

New static method CreateKeepingAspectRatio in TClientAnchor allows you to fit an image inside a range of cells while maintaining the aspect ratio of the image. You can either specify the 4 coordinates of the range where you want the image, and have the image centered or aligned inside that range, or you can leave one of the coordinates at -1. If you set Row2 or Col2 to -1, this method creates an image that fits the other Col1-Col2 or Row1-Row2 range respectively, keeping the aspect ratio.

New method SetTable allows modifying existing tables. While you could modify existing tables by using RemoveTable and AddTable, you can now modify them directly with SetTable.

New DrawBorders method in ExcelFile allows quickly drawing a border around a range of cells. This new method is just a shortcut for calling SetCellFormat, but it is a little easier to discover and use.

Now you can specify multiple folders with fonts when exporting to pdf. In the OnGetFontFolder event of a FlexCelPdfExport, you can now return a list of folders separated by semicolons, such as "c:\font1folder;c:\font2folder", and FlexCel will search for the fonts inside font1folder and font2folder.

<#includes> in reports now balance in the containing band.
When you include a subreport in a report, the main parent is now balanced as it is with ranges inside ranges.

Bug Fix. When "Precision as displayed" was set to true in the file options, the recalculation engine could calculate some values with a different precision than the one in the cell.

Bug Fix. When a file with dates starting in 1900 had a formula linked to another file with dates starting in 1904, the dates in the 1904 file would be treated as 1900 dates, and similarly for a 1900 file linking to a 1904 file.

Bug Fix. In some rare cases, changing a style could throw an exception.

New on v 6.13.2.0 - February 2017

New method XlsFile.AddAutoShape to add autoshapes with the API. The new method allows you to add an autoshape to a file. As usual, APIMate will provide you with the needed code.

Support for Visual Studio 2017 RC. Setup now installs into Visual Studio 2017.

Support for .NET Core 1.1. Project.json was changed to a csproj in order to build on 1.1.

Support for drawing mirrored images when rendering. When a bitmap is flipped vertically or horizontally, FlexCel now draws it that way when exporting it.

New method TCellAddress.DecodeColumn. This method is provided for symmetry with the existing TCellAddress.EncodeColumn, but it is normally not required, as you can use TCellAddress to get the full cell address string for a row and a column. It could be used in the rare case where you only want the column string and not the full cell address.

Breaking Change: Support for losslessly rotated JPEG images. When inserting JPEG images that are rotated via the "orientation" attribute in the JPEG file, FlexCel now automatically rotates the image so it appears with the desired orientation in Excel. FlexCel used to behave like Excel 2010 or older, entering the image as-is, so Excel would show it rotated. Now it works like Excel 2013 or newer, where the image is rotated in Excel to compensate. All orientation values are supported, including mirrors. Note that this change might be breaking if you were manually rotating the images before inserting them so they would display correctly; if so, you should remove that code, as the rotation now happens automatically. There is also a new method, ImageUtils.GetJPEGOrientation, which you can use to tell whether a JPEG image is rotated.

Bug Fix. When autofitting a column that contained multiple lines but the cell was set not to wrap, FlexCel would consider that the cell could wrap, and so end up with a smaller column width than needed.

Bug Fix. Xml declarations could be duplicated when writing custom xml parts.

Bug Fix. There could be an exception when replacing hyperlinks in a shape with a report.

Bug Fix. Support for reading xlsx files with custom properties with repeated or empty names.

New on v 6.13.0.0 - January 2017

Support for rendering right-to-left sheets. FlexCel can now export sheets where A1 is at the right side of the page and the cells grow to the left instead of to the right. A new property, XlsFile.SheetIsRightToLeft, allows you to read or write the right-to-left state of the sheet directly, without needing to use SheetOptions. APIMate will now also suggest SheetIsRightToLeft instead of SheetOptions for RTL sheets.
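A minimal sketch of the new property, assuming the usual FlexCel open/save pattern; only the property name is taken from the note above.

    using FlexCel.XlsAdapter;

    class RtlSheetDemo
    {
        static void Main()
        {
            XlsFile xls = new XlsFile();
            xls.Open("arabic-report.xlsx");
            xls.SheetIsRightToLeft = true;   // A1 at the right edge, columns grow leftwards
            xls.Save("arabic-report-rtl.xlsx");
        }
    }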
Improved right-to-left support for text. FlexCel now supports mixed right-to-left and left-to-right text more as the Unicode BIDI algorithm defines it. The Context property of the cell is now also used to figure out whether it is rtl text embedded in ltr text, or ltr text embedded in rtl text.

New static properties ExcelFile.CompressionLevel, FlexCelConfig.XlsxCompressionLevel and FlexCelConfig.PdfPngCompressionLevel. ExcelFile.CompressionLevel and FlexCelConfig.XlsxCompressionLevel are the same and control the zip compression level used to create xlsx files. FlexCelConfig.PdfPngCompressionLevel controls the compression level for pdf and png files. FlexCel uses the "zcDefault" zlib compression level, which normally gives the best ratio between speed and size. Note that Excel itself uses zcFastest when saving xlsx files, resulting in faster saves but also bigger files. While you probably won't want to change the defaults, now you can. Note: we require .NET 4.5 or newer for this property to work.

New static property FlexCelConfig.DpiForReadingImages. This new property allows you to force a resolution for the images you are loading. Normally FlexCel uses the resolution stored in the images to calculate the desired width in inches, but now you can override whatever is saved in the file by changing this property.

New implementation of wildcard matching for all functions that use it. The new algorithm for matching patterns like * or ?, used in functions like MATCH or COUNTIF, is now much faster and can use much less memory in pathological cases.

New method FlexCelReport.Run(Stream templateStream, Stream outStream, TFileFormats fileFormat). This method allows you to specify the resulting file format when running a report to a stream.

Better handling of expressions or formats defined both in an included report and in the master report. When an included report has the same expressions or formats defined in the config sheet as the master, the local definitions are now used, instead of raising an error about repeated formats/expressions.

Breaking Change: Better handling of image resolution in reports. When adding an image to a report and resizing it, FlexCel now takes into account the image resolution if it is saved in the image. If the image doesn't have a resolution saved, FlexCel uses the screen resolution. You can revert to the old behavior of assuming a resolution of 96 dpi for all images by changing FlexCelConfig.DpiForReadingImages.

Bug Fix. When exporting to HTML, if a merged cell covered hidden rows or columns, the resulting html could be wrong.

Bug Fix. When exporting to HTML with embedded SVG images, the fill colors in the SVG images would be wrong if there were gradients.

Bug Fix. When exporting to SVG, text in controls or shapes could go a little lower than it should.

Bug Fix. The formula parser would fail to detect some Unicode characters as valid characters for a sheet name or named range.

Better display of complex numeric formats. We now handle some complex formats the same way Excel does, and also handle invalid formats (which Excel doesn't allow) better.

Bug Fix. In .NET Core we could fail to read formulas.

Bug Fix. When a file had "Precision as displayed" set and there were cell formats including percentage signs, the numbers might be rounded wrong.

Bug Fix. There could be a stack overflow when a camera object rendered a range of cells that included the cells where the camera object was.

New on v 6.12.0.0 - October 2016

Improved compatibility with Windows Phone devices. Some Windows 8.1 devices have a bug that doesn't let them read resource files (resx).
This means that FlexCel would fail when deployed to those devices, even though it would work in the simulator. To fix this, FlexCel for Windows Phone and Windows RT no longer uses resource files.

Improved performance when creating tens of thousands of names in a file. FlexCel is now much faster when creating a file with tens of thousands of names.

Breaking Change: Bug Fix. [...] and SheetProtectionOptions.Scenarios, as they might be reversed. If you are saving as xlsx files, there is no need to change anything, as xlsx already worked as expected.

Better drawing of labels in charts. Labels inside charts now draw more like Excel when exporting xls files to pdf or html. If multiple labels would overlap, FlexCel now tries to separate them. The leader lines in pie charts, from the slices to the legends, render better too.

APIMate will now suggest new TSheetProtection(TProtectionType.All/None) instead of new TSheetProtection(true/false). The constructors using true and false can be confusing: while they work, they set all the protection flags to true or false, and some protections are active when the property is true (contents, objects and scenarios) while the others are active when the property is false (all the other properties). The constructors using TProtectionType set some values to true and some to false, as needed to have the whole sheet protected or unprotected.

Bug Fix. When copying sheets in a file, some conditional formats could raise a null reference exception.

Improved compatibility with third-party created files. Specifically, we can now read spreadsheets created with Google Docs that contain pivot tables. Those are invalid xlsx files lacking required attributes, and FlexCel would complain about the missing attributes. Now it ignores them, and fixes the files if you open and save them in FlexCel.

New on v 6.11.0.0 - September 2016

Support for Excel tables in xlsx files. There is partial support for tables in the FlexCel API:
- Tables are preserved when editing xlsx files. Note that we refer to the tables introduced in Excel 2007; other tables, like "what-if" tables, were already preserved.
- Tables are copied and modified when you insert or copy ranges.
- API for reading tables.
- Preview API for writing tables. Note that this API is not complete yet and might fail in some cases. APIMate will show you how to add a table with the API.
- There is no rendering yet (exporting tables to pdf, etc.), and no calculation of table references like =SUM(@Table1[column2]).

New properties FullRecalcOnLoad and FullRecalcOnLoadMode in XlsFile. FullRecalcOnLoad tells you whether the xlsx file opened with FlexCel had the "Full Recalc on Load" property set to true. When it is true, the file normally doesn't have the values of the calculated formulas, and you need to do a manual XlsFile.Recalc() to get the values. FullRecalcOnLoadMode allows you to tell FlexCel how it should mark the files it creates: in the default mode, it marks them as not needing full recalculation if they were calculated on save by FlexCel (the default), and as needing recalc on open otherwise. Note that these two properties only apply to xlsx files; xls files don't have this property, and the value returned will always be false. (See the sketch below.)
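A minimal sketch of how the new properties might be used, following the XlsFile.Recalc() advice above; only the property and method names are taken from these notes.

    using FlexCel.XlsAdapter;

    class RecalcOnLoadDemo
    {
        static void Main()
        {
            XlsFile xls = new XlsFile();
            xls.Open("thirdparty.xlsx");

            if (xls.FullRecalcOnLoad)   // file was saved without cached formula results
                xls.Recalc();           // compute the values before reading them

            object value = xls.GetCellValue(1, 1); // now safe to read calculated values
        }
    }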
Some repeated function results are now calculated only once, for better recalculation speed. FlexCel can now detect repeated subexpressions inside a formula and calculate them only once. For example, if you have a thousand formulas like =If(A1=Sum($B$1:$E$1000),1,Sum($B$1:$E$1000)), then Sum($B$1:$E$1000) will be calculated only once for each of those formulas. This can bring significant speed improvements if you have formulas with this pattern.

Performance improvements in function calculations. Some of the most used functions, like SUM, COUNT, AVERAGE, SUMIF and COUNTIF, have been optimized to run faster.

Performance improvements in formula parsing. The formula parser is now a little faster, which can lead to faster loading of xlsx files with thousands of formulas.

Performance improvements loading xlsx files with thousands of comments. Xlsx files with thousands of comments should now load much faster.

Improved rendering of numbers that don't fit inside a cell. When a number doesn't fit inside a cell, Excel shows #### instead, but it always shows at least one #; if it can't fit a complete # into the cell, it displays the cell empty. FlexCel was showing part of a # sign when a full # sign wouldn't fit; now it behaves like Excel and shows the cell empty.

Bug Fix. When deleting sheets with locally stored defined names, if you had multiple references to those names in a single formula, FlexCel could fail to update the names.

Bug Fix. When setting a column format for many columns at the same time with resetCells true, some cells might not be reset.

Bug Fix. While it is invalid to write a file with conditional formats or data validations whose formulas refer to other sheets, Excel can load such files (though you won't be able to modify them). FlexCel can now read them too, without reporting an error.

Bug Fix. FlexCel could raise an exception when deleting ranges with conditional formats.

New on v 6.10.0.0 - August 2016

Support for converting conditional formats between xls and xlsx files. Together with the support for conditional formats in xlsx files introduced in 6.9, this completes full support for conditional formats in xls and xlsx files. The APIs to read and write conditional formats now work in both xls and xlsx, and conditional formats convert seamlessly between xls and xlsx files, even formats not supported in the original xls97 spec.

Support for using formulas in "Text" conditional formats. Formulas here weren't allowed by Excel 2007, and FlexCel 6.9 didn't allow them either. They are now fully supported, for Excel 2010 and newer.

Fixed small validation issues in the xml generated for xlsx files. Some xlsx files generated by FlexCel could contain xml that would not validate against the xlsx reference schemas. Excel would still open those files, but the spec wasn't correctly implemented.

Breaking Change: Removed the "IsPercent" property from IconSet conditional format definitions. IsPercent had no effect in IconSet rules and is deprecated in Excel. As IsPercent was introduced in the previous version, 6.9, it made sense to remove it before it got into wide use.

Bug Fix. Sometimes, with very complex groups of conditional format rules, some rules could be ignored when exporting to pdf.

Bug Fix. Reversed iconsets were exported to pdf without being reversed.

Bug Fix. When copying cells with conditional formats from one xlsx file to another, the borders wouldn't be copied.

New on v 6.9.0.0 - August 2016.

New on v 6.8.8.0 - August 2016

Bug Fix. When rendering superscripts in multiline cells, the distance between lines could be wrong.

New on v 6.8.7.0 - July 2016

Unknown root parts preserved in xlsx/m files.
Root parts that do not conform to the xlsx spec, like ribbon customization or arbitrary parts, are now preserved when editing xlsx/m files.

New on v 6.8.6.0 - July 2016

User customization parts preserved in xlsx/m files. The buttons that you add to the Quick Access Toolbar in the ribbon for one specific document only are now preserved when you edit the document with FlexCel.

Support for the latest version of Xamarin Android. A change in Xamarin now requires a reference to Java.Interop.

Bug fix. Some invalid png files could cause exporting to pdf to hang.

Bug fix. Fixed issues with resources in .NET Core.

Bug fix. A chart with an empty array as its range would throw an exception when saving xlsx files.

New on v 6.8.5.0 - June 2016

Controls and drawings in xlsx are now stored in the same hierarchy, and a control can be below a shape. The xlsx file format introduced in Excel 2007 had controls and drawings in separate parts, so all controls were always above any drawing, and it wasn't possible to group controls and images. Since Excel 2010, shapes and controls are also stored in a common stream, so you can group them or put images above controls. FlexCel now reads and writes the Excel 2010 parts if available, and correctly puts controls below images, or grouped with them, if needed. Note that this applies only to xlsx; it was always possible to group controls and images in xls.

Bug fix. Grouped shapes with more than 2,147,483,647 emus of height (approximately 38,000 rows at standard row heights) would be truncated in xlsx files. (xls files are always truncated anyway, since that is a limitation of the file format.)

Bug fix. External links could fail to load in xlsx files in Excel 2007. (Excel 2010 and up were already fine.)

New on v 6.8.4.0 - June 2016

New XlsFile.RenderCells overload that allows rendering objects and borders. RenderCells can now also render the objects that are in the group of cells.

Support for rendering linked images (camera tool) to pdf/html/etc. FlexCel now updates linked images when rendering, to show the correct image.

New on v 6.8.2.0 - June 2016

Support for preserving ActiveX controls in xlsx files. FlexCel now preserves ActiveX controls when you open, modify and save xlsx/m files. ActiveX objects are not converted between xls and xlsx files.

Form controls are now read from and written to the Excel 2010 stream besides the Excel 2007 stream. FlexCel now reads the Excel 2010 stream for Form controls and uses it, if available, instead of the Excel 2007 stream. It also writes both an Excel 2007 and a 2010 stream. The Excel 2010 stream is better because it saves the coordinates in device-independent units, so controls look fine when opened on high-DPI displays. Excel 2007, on the other hand, uses real pixels, which results in different dimensions for the controls when opened in high-dpi mode.

Files created with NewFile now have no printer settings, and their locale is always English. Depending on the Excel version passed to NewFile, FlexCel could add some printer settings to the empty file, and some versions had different locales. Now all locales for files created by NewFile are US English, and there are never printer settings.

Support for the camera tool (linked images) in xlsx files. "Camera tool" pictures are now preserved when saving to xlsx, converted between xls and xlsx, and updated when you insert rows or columns.
New global variables TSmoothingMode.FlexCelDefault and TInterpolationMode.FlexCelDefault. Those new variables let you decide the default antialiasing and interpolation modes used when rendering images.

Breaking Change: Autoshapes in xls files are now rendered using the xlsx definition stored in the file, if present. Excel 2007 and newer save the autoshapes in two different places inside an xls (not xlsx) file: one that is read by Excel 2003 or older, and another which is read by Excel 2007 or newer. FlexCel used to read the Excel 2003-or-older section to render the autoshapes; now it uses the Excel 2007-or-newer section. While in general this should improve autoshape rendering, note that this change is potentially breaking if you have files with a correct definition in the xls section and an incorrect one in the xlsx section.

Recovery mode can now open files with invalid format strings. Now, when XlsFile.RecoveryMode = true and the file has invalid format strings, FlexCel will open the file anyway and report the errors in FlexCelTrace.

Bug Fix. In some border cases, when opening and saving a file multiple times and adding the same format every time, the format could be added each time instead of being detected as already existing.

Bug Fix. Some autoshapes with holes inside could be rendered as fully filled.

Bug Fix. Improved rendering of custom xls autoshapes.

Bug Fix. There could be an error when saving pivot cache slicers or timelines in multiple sheets.

Bug Fix. Some colors in controls or shapes in xls files could be read wrong.

Bug Fix. Improved compatibility when opening invalid xlsx files.

New on v 6.8.1.0 - May 2016

Bug Fix. Some manually edited xlsx files could fail to load in .NET 2, 3.5, pcl or .netcore.

Added support for .NET Core RC2. The FlexCel for .NET Core version now supports (and requires) .NET Core RC2 (.NET Core SDK 1.0 Preview 1).

Bug Fix. Fixed an error when exporting images to pdf in UWP 10 apps.

New on v 6.8.0.0 - April 2016

Support for Windows 10 Universal Apps. The FlexCelPortable dll can now be used from Windows 10 Universal apps.

Full support for hyperlinks in autoshapes in xlsx files. Hyperlinks in shapes inside xlsx files are now fully preserved, as they were in xls files. They also convert between xls and xlsx, and you can change the hyperlinks of the shapes with xls.SetObjectProperty. Links are exported to pdf, html and svg.

Full support for "Allow users to edit ranges" in the API. The new methods XlsFile.Protection.AddProtectedRange, XlsFile.Protection.DeleteProtectedRange, XlsFile.Protection.ProtectedRangeCount and XlsFile.Protection.ClearProtectedRanges allow you to read and modify protected ranges (a sketch follows at the end of these notes, below). Note that for simple protection you can still just lock or unlock the cells in the cell formatting. APIMate now also reports how to enter protected ranges.

New <#Switch> and <#IFS> tags in FlexCel reports. Those tags behave like the IFS and SWITCH functions added in the Excel 2016 January update. They can eliminate "if chains" and make the expressions simpler. For example: <#ifs(<#value> < 10;<#format cell(red)>;<#value> < 20;<#format cell(yellow)>;true;<#format cell(green)>)>

Autosize of chart axes when rendering charts. Now, when exporting xls charts to pdf/html/etc, the chart axis will move to fit the axis data so it doesn't get cut off.
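A hedged sketch of the protected-ranges API named above. The method names come from these notes, but the parameter shapes (a cell range plus a name for it) are assumptions for illustration, not the documented signatures:

    using FlexCel.Core;
    using FlexCel.XlsAdapter;

    XlsFile xls = new XlsFile(1, TExcelFileFormat.v2016, true);
    // Hypothetical arguments: the editable cell range (B2:D10) and a name.
    xls.Protection.AddProtectedRange(new TXlsCellRange(2, 2, 10, 4), "PayrollInput");
    int count = xls.Protection.ProtectedRangeCount;  // assumed readable count
    xls.Protection.ClearProtectedRanges();           // remove them all again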
New parameter convertFormulasToValues added to PasteFromXlsClipboardFormat. This parameter allows you to paste the formulas as values from the clipboard. This is especially useful if the formulas you are pasting reference other workbooks, since they wouldn't reference the correct cells when pasted.

New parameter recalcBeforeConverting added to ConvertFormulasToValues and ConvertExternalNamesToRefErrors. This parameter allows you to convert formulas to values without first having FlexCel recalculate the file (which was the default before). So if your file can't be recalculated by FlexCel, for example because it contains links to other files that don't exist anymore, you can still convert the formulas to the latest calculated values.

New overload of FlexCelPdfExport.ExportAllVisibleSheets taking a filename. This overload is a shortcut for creating a file stream, calling BeginExport on the stream, then calling ExportAllVisibleSheets and then calling EndExport (a sketch of the long form follows at the end of these notes, below).

Support for the ShrinkToFit attribute in cells when exporting to pdf/html/svg/images/printing/previewing. The ShrinkToFit attribute of cells is now rendered when exporting, and will show in printing, previewing and exporting.

Support for adding horizontal scrollbars with the API. There is a new property in TSpinProperties which allows you to specify that the scrollbar is horizontal. APIMate will report how to do it from a horizontal scrollbar in an Excel file.

The <#IF> tag in reports can now omit the false section. You can now write a tag like <#if(true;hi)> instead of <#if(true;hi;)>

Better chart rendering for xls files. Now the labels overflow in a way similar to Excel, and FlexCel calculates the chart axis positions so the chart won't overflow.

The file created by XlsFile.NewFile(n, TExcelFileFormat.v2016) is now in the Excel "January update" format. The January update of Excel 2016 added some new fonts to the themes and changed the build id of a default empty file. Now the empty files that FlexCel generates when you specify v2016 include those new fonts in the themes. The build id had already been updated in a previous FlexCel release.

Bug Fix. When exporting xls bar and column charts with a single data series and "Vary colors per point" = true, FlexCel was not changing the colors on each point.

Bug Fix. When copying sheets with data validations to other files, where the data validations referred to a list in a different sheet, the data validation would be copied wrong.

Bug Fix. When rendering conditional formats in xls files, the background color could sometimes be ignored.

Bug Fix. The constructor of TBlipFill wasn't public.

Bug Fix. TXlsFile.DpiForImages would be ignored in some metafiles.

New on v 6.7.16.0 - March 2016

Support for the new functions introduced in the Excel 2016 January update. FlexCel can now recalculate and recognize the 6 new functions introduced in the Excel 2016 January update: TEXTJOIN, CONCAT, IFS, SWITCH, MINIFS, MAXIFS.

Updated the RecalcVersion for Excel 2016 to the Excel 2016 January update. The January update of Excel 2016 changed the RecalcVersion id saved in xls and xlsx files. This means that xls or xlsx files saved with Excel 2016 "pre-January-update" would ask for saving when opened and closed in Excel 2016 "post-January-update". Now, when you choose the RecalcVersion in FlexCel to be 2016, FlexCel will identify the file as saved by "post-January-update" Excel 2016. This will avoid the save dialog when opening in Excel 2016 with all the updates.
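For reference, here is a hedged sketch of the long form that the new ExportAllVisibleSheets filename overload replaces. The method sequence comes from the notes above, but the exact ExportAllVisibleSheets arguments are an assumption for illustration:

    using System.IO;
    using FlexCel.Render;
    using FlexCel.XlsAdapter;

    XlsFile xls = new XlsFile(true);
    xls.Open("report.xlsx");   // hypothetical input file
    using (FileStream fs = new FileStream("report.pdf", FileMode.Create))
    {
        FlexCelPdfExport pdf = new FlexCelPdfExport(xls);
        pdf.BeginExport(fs);
        pdf.ExportAllVisibleSheets(false, "report");  // assumed arguments
        pdf.EndExport();
    }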
New value in TRecalcVersion: TRecalcVersion.LatestKnownExcelVersion identifies the file as saved by the latest Excel version that FlexCel knows about. If you set xls.RecalcVersion to TXlsRecalcVersion.LatestKnownExcelVersion, FlexCel will identify the file as saved by the latest Excel version it is aware of; currently this means the files will be identified as saved by the Excel 2016 January update. When newer Excel versions appear and FlexCel is updated to support them, this version will automatically increase to the latest without you needing to modify your source code (a sketch follows at the end of these notes, below).

New property UsedZoom in TOneImgExportInfo. The property UsedZoom tells you the actual zoom that is going to be used when printing or exporting the sheet. So you can now call FlexCelImgExport.GetFirstPageExportInfo() and get the zoom of the pages that will be printed, including the zoom calculated for print-to-fit if set.

Improved compatibility with invalid xls and xlsx files. FlexCel will now fix files which have an invalid active sheet stored, setting the active sheet to the first one in those cases.

Improved compatibility with third-party xlsx files. FlexCel will now understand xlsx files which use absolute references like $A$3 in cell value addresses. Note that Excel never writes absolute references in the cell value addresses, but some third parties might. Now you will be able to read those files too.

Bug Fix. COLUMNS DataSet would not work inside a filter.

Bug Fix. Rendering of images in headers and footers in xlsx files could be wrong if the sizes in the file were in mm.

New on v 6.7.12.0 - February 2016

New properties ExcelFile.HeadingRowHeight and ExcelFile.HeadingColWidth. Those properties allow you to specify the width of the heading column and the height of the heading row when printing headings or exporting them to pdf via FlexCel. The "Custom preview" demo now sets those properties so the sizes are automatic.

New on v 6.7.10.0 - February 2016

Bug Fix. When using TFlexCelImgExport.ExportNext to save to a file format without transparency (like JPEG), the background would be black. Now it is transparent if the format supports alpha channels, and white otherwise.

New on v 6.7.9.0 - February 2016

New static events TUIFont.FontCreating and TUIFont.FontCreated. Those events allow you to customize the font replacements in your system. For example, if you have Excel files that use a font "MyDeprecatedFont" and you want to replace it with "MyNewCoolFont" when exporting to pdf, you can use the FontCreating event to do so. You can use the FontCreated event to catch fonts where the original wasn't present in the machine and was substituted by the operating system with something else. You can then provide a different substitute font.

New on v 6.7.8.0 - February 2016

Experimental support for .NET Core 1.0 and ASP.NET Core 1.0. There is a new nuget package included which can be used in .NET Core. As .NET Core doesn't have a graphics library yet, this package can only deal with xls and xlsx files, and has no graphics capabilities like exporting to pdf or html.
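A minimal sketch of the LatestKnownExcelVersion setting described at the top of these notes; the file name is hypothetical:

    using FlexCel.Core;
    using FlexCel.XlsAdapter;

    XlsFile xls = new XlsFile(true);
    xls.Open("book.xlsx");
    // Tag the file as calculated by the newest Excel version FlexCel
    // knows about, so that Excel version won't recalculate on open
    // or prompt to save on close.
    xls.RecalcVersion = TXlsRecalcVersion.LatestKnownExcelVersion;
    xls.Save("book.xlsx");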
New methods OffsetRelativeFormula and RecalcRelativeFormula in ExcelFile. These new methods allow you to know the real value of a relative formula, such as those returned by names and data validations. Relative formulas depend on the cell the cursor is in, so if the cursor moves, the formula changes. As FlexCel doesn't have a cursor, it always returns the formulas considering the cursor at A1. With OffsetRelativeFormula you can get how the formula would look when the cursor is at, for example, B3, and with RecalcRelativeFormula you can recalculate the formula and get the result when the cursor is at B3 (a hedged sketch follows at the end of these notes, below).

Support for quoted column names in reports. Now you can quote a column name inside a tag in a report, like <#"db.column ) ">. This can be useful if you have column names with, for example, unbalanced parentheses. Note that you don't need to quote the name if it has balanced parentheses.

Bug Fix. When <#including> subreports inside FlexCel reports with the RC option, empty row formats would be copied to non-empty row formats.

Bug Fix. ActiveX controls with a size larger than an Int32 would raise an Exception when loading.

Bug Fix. Bidirectional reports could fill some wrong cells when using multiple master-details in the rows.

Bug Fix. Xlsx files with autofilters could become invalid if you deleted the range which contained the autofilter.

Bug Fix. VLookup and HLookup would return a match if you searched for a blank string ("") and the cell was empty. Excel doesn't return a match in those cases, and now FlexCel doesn't either.

Bug Fix. Double-bordered lines could render wrong when the zoom was big (about 200% or more).

New on v 6.7.3.0 - January 2016

New JOIN and UNION commands for reports. Those commands are written in the config sheet and allow you to either JOIN the columns of multiple tables into a single table, or to do a UNION of the rows of multiple tables into a single one. See the new "Join and Union" demo.

Improved bidirectional reports. Now bidirectional reports can work with rows in master-detail, and they will also delete empty column bands if none of the columns has records.

Improved preservation of timelines in xlsx. Timelines are a feature introduced in Excel 2013 which allows you to graphically navigate a timeline in a data source. Now FlexCel should preserve timelines for pivot tables.

Bug Fix. Fixed the order of records specific to Excel 2010 to work around a bug in Excel 2010. Some very complex files could raise an error when opened in Excel 2010, even when they were correct by the xlsx spec.

New on v 6.7.2.0 - December 2015

Copy to the clipboard now supports html. There is now an extra option in the formats to be copied to the clipboard: in addition to native xls (best for copying from one spreadsheet to another) and text (for apps that don't understand anything else), you can now also copy as html, which gives the best results when pasting a spreadsheet into Microsoft Word or PowerPoint. You can copy to html using either XlsFile.CopyToClipboardFormat or FlexCelHtmlExport.ExportToClipboardFormat.

Bidirectional reports. Now you can create ranges in the shape of a cross that expand to the right and down at the same time. While you could do this before by splitting one of the ranges in 3, now you can directly intersect the ranges and get the correct result. Take a look at the new Bidirectional Reports demo and the documentation in the report designer guide.

Changed default fallback fonts in pdf. Windows 10 doesn't come with MS Mincho or MS Gothic installed by default (you need to manually install the language packs to get the fonts). So FlexCel now looks for both MS Mincho/Gothic (for Windows older than 10) and YuMincho/Gothic for Windows 10.
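A heavily hedged sketch of the relative-formula methods described at the top of these notes. The argument shapes (a formula text plus the cell where the cursor should be considered) are pure assumptions for illustration, not documented signatures:

    using FlexCel.Core;
    using FlexCel.XlsAdapter;

    XlsFile xls = new XlsFile(true);
    xls.Open("book.xlsx");   // hypothetical file
    // Hypothetical arguments: a relative formula as returned with the
    // cursor at A1, and the target cursor cell (B3 = row 3, column 2).
    string atB3 = xls.OffsetRelativeFormula("=A1+1", 3, 2);   // assumed signature
    object result = xls.RecalcRelativeFormula("=A1+1", 3, 2); // assumed signature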
The tags <#List>, <#DbValue> and <#Aggregate> can now work inside nested Array/LINQ datasets. Now, when you have a master-detail relationship where the detail is a property of the master, FlexCel can find the master dataset for the <#List>, <#DbValue> and <#Aggregate> tags even when they are not added with AddTable (a sketch follows at the end of these notes, below).

New property XlsFile.DocumentProperties.PreserveModifiedDate. By default, FlexCel sets the modified date of the files it saves to the date when the file was saved. But if you want to keep an arbitrary date instead, you can set PreserveModifiedDate to true.

FlexCel now sets the creation and modification dates in xls files too. Creation and modification dates are now stored in xls files, same as they already were in xlsx.

FlexCel now allows you to set the file creator for xlsx files. By default, files created by FlexCel are identified as created by FlexCel in the document properties. Now you can change the application creator by writing xls.DocumentProperties.SetStandardProperty(TPropertyId.NameOfCreatingApplication, "SomeNewCreator")

Bug fix. LastModifiedDateTime wasn't returned correctly for xlsx files.

Bug fix. Macros converted from xls files to xlsx could fail to open in Excel 2016 in some border cases.

Improved Getting Started document. GettingStarted now shows actual code examples for simple tasks and contains links to all the documentation.

New on v 6.7.1.0 - November 2015

Support for new Excel 2016 features. While old FlexCel versions still work fine with Excel 2016 (as expected), FlexCel now provides support for new extra features in Excel 2016. XlsFile.NewFile now allows creating files like Excel 2016 creates by default. Also, XlsFile.RecalcVersion has a 2016 option to tag your files as created by Excel 2016, so Excel 2016 doesn't ask for saving when closing them.

Improved support for data validations that have lists with cells from other sheets. Data validations with lists of cells from other sheets were introduced in Excel 2010, and while FlexCel preserved them, it wouldn't modify them when inserting or deleting ranges, and they weren't reported by the API. Now they are modified and reported by the API, just like all the other data validations.

Slicers for pivot tables are now preserved in xlsx. FlexCel will now preserve the slicers for pivot tables present in xlsx files. This feature is available only in Excel 2010 or newer, so you won't see the slicers in older Excel versions, but the generated files will still open without errors.

Excel 2010 equations are now preserved in xlsx. FlexCel will now preserve the new equations in Excel 2010 (Ribbon -> Insert -> Equation).

Center-across-selection cells are now exported to html. The html export now handles cells marked as "center across selection", as exporting to pdf and the other exports already did.

Improved exporting of superscripts and subscripts to html. Superscripts and subscripts are now exported better to html files.

Full support for formulas attached to textboxes or autoshapes. FlexCel will now preserve, and convert between xls and xlsx, textboxes or shapes which have their text linked to a formula. If you modify the linked cell, the text in the textbox will change.

New methods XlsFile.SheetID and XlsFile.GetSheetIndexFromID. These new methods can be used to identify a sheet in a FlexCel session and get it back later. Note that as this ID is not saved in the file, it will change every time you load a new file, so it can only be used within a single session.
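A minimal sketch of a report run over nested in-memory objects, per the <#List>/<#DbValue> entry at the top of these notes. The classes, variable names and file names are hypothetical:

    using System.Collections.Generic;
    using FlexCel.Report;

    class OrderLine { public int Qty { get; set; } }

    class Order
    {
        public string Name { get; set; }
        public List<OrderLine> Lines { get; set; }  // detail as a property
    }

    // The template can reference the detail through the master's property,
    // without a second AddTable call for the detail table.
    var orders = new List<Order>();  // fill with data
    var report = new FlexCelReport(true);
    report.AddTable("Orders", orders);
    report.Run("template.xlsx", "result.xlsx");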
New static property "UseLegacyLookup" in FlexCelReport. If you set this property to true, FlexCel will use DataViews to do lookups in DataSets instead of the new, faster internal lookup. Set this property to true only if your existing reports rely on bugs in the DataView implementation.

Data validations entered manually in xls files could fail to work when opened in Excel. In some border cases, Excel would report all values as invalid for a data validation entered with FlexCel, even if the values were valid. This only applied to xls files.

FlexCel can now open xlsx files with images of the wrong declared type. If an xlsx file contains, for example, a png which is declared as a jpg, FlexCel will now open it as a png anyway. This will only happen with corrupt files or files generated by incorrect third-party products.

Error when deleting rows in a pivot table. When deleting rows in a pivot table in an xlsx file, the row counts could go negative, creating invalid files.

Improved compatibility with third-party tools. Worked around some tags not understood by other third-party tools, and FlexCel can now read files missing some required records.

New on v 6.7.0.0 - September 2015

Support for opening xls versions from 2 to 4. As FlexCel already supported xls 5 and up, and Excel 1 doesn't exist for Windows, this completes support for all versions of xls. While xls versions from 2 to 4 aren't in wide use, they are still used by other third-party libraries.

Enhanced high DPI support in FlexCelPreview. FlexCelPreview now supports high DPI in Windows, besides iOS and OSX as it already did.

Breaking Change: The Resolution property in FlexCelPreview has been removed. It was removed because FlexCelPreview now automatically adjusts to the resolution of the monitor.

Full support for background images in a sheet. XlsFile adds two new methods to deal with background images in a sheet: SetSheetBackground and GetSheetBackground. Background images are now converted between xls and xlsx. APIMate will also report the code to add a background image to a sheet. A new property, ExportSheetBackgroundImages, allows you to print or export the background images. (Note that Excel never prints the background images, so this property is false by default.)

Full support for manipulating custom XML parts with XlsFile. The new methods CustomXmlPartCount, AddCustomXmlPart, GetCustomXmlPart and RemoveCustomXmlPart in XlsFile allow reading and writing the custom xml parts of an xlsx file (a hedged sketch follows at the end of these notes, below). APIMate will now show how to enter custom xml parts in an xlsx file.

New property for PDF files: InitialZoomAndView. The new InitialZoomAndView property allows you to specify the initial page and zoom when opening the document.

New property for PDF files: PageLayoutDisplay. The new PageLayoutDisplay property allows you to specify whether to display one or two pages, and continuous scrolling or one page at a time, when opening the document.

Two new modes for the PDF PageLayout. Generated PDF files can now use a PageLayout of TPageLayout.OptionalContent to show the optional content panel, or TPageLayout.AttachmentPanel to show the attachments panel.

New property ScreenScaling in XlsFile. This new property allows you to work around Excel bugs when working on high dpi displays.

Better handling of stored numbers in xlsx. Numbers are now saved in xlsx with a roundtrip format, which ensures the number written in the file is exactly the same number that will be read back.
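A hedged sketch of the custom XML part methods named above. The method names come from these notes, but the parameter shapes are assumptions for illustration only:

    using FlexCel.Core;
    using FlexCel.XlsAdapter;

    XlsFile xls = new XlsFile(true);
    xls.Open("book.xlsx");                        // hypothetical file
    int parts = xls.CustomXmlPartCount;           // count of existing parts
    xls.AddCustomXmlPart("<root>hello</root>");   // assumed: takes the xml text
    xls.Save("book.xlsx");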
Ability to <#include> empty names in reports. Now, when you use the <#include> tag in a report, you can leave the name to include empty. This means the whole used range in the active sheet will be inserted.

New overload for XlsFile.DeleteRange. There is a new option for XlsFile.DeleteRange which clears the cells but not their formats: it behaves similarly to pressing "Delete" over a range of cells in Excel.

New property ExportEmptyBands in FlexCelReport. ExportEmptyBands replaces the existing ExportEmptyRanges property, which has been deprecated. It allows you to choose between 3 behaviors when the data table has 0 records: delete the range and move cells up, clear the data and format of the range, or clear only the data.

Bug Fix. FlexCel will now make sure that the xml declarations in the custom xml parts added with AddCustomXmlPart use the same encoding as the encoding used to store the file.

Bug Fix. In xlsx files, external formulas referring to other sheets whose names start with a number weren't quoted, and Excel would report an error when opening those files.

Bug Fix. FlexCel would fail to load files with formulas which pointed to tables in other files using the new table formula syntax.

Breaking Change: Improved lookup tag in reports. The <#lookup> tag in reports has been rewritten to be faster and behave better. If you are defining your own VirtualDataSets and overriding the Lookup function, you might need to rewrite it, as the parameters changed. But with the new base lookup implementation that is now available for all, you might just remove the override and use the base.

Bug Fix. The Subtotal function could recalculate wrong in border cases.

SPLIT datasets in reports can now be used as datasets for sheets. This allows you to overflow a report into multiple sheets: when the data in a sheet reaches the maximum of the split, it continues in a different sheet. A new sample, "Overflow sheets", shows how to do it.

Copy to clipboard wasn't working in Excel 2013. We modified the clipboard format, so now it works.

Bug Fix. When inserting or deleting columns, array formulas located in other sheets might not update to take into account those changed rows or columns.

Bug Fix. Sometimes, when moving a range, array formulas which pointed to that range might fail to update.

Bug Fix. Some functions with array arguments could be calculated incorrectly when the formula was not an array formula.

Bug Fix. The lookup tag introduced in 6.6.32 could fail if the lookup value was a tag in the template.

Bug Fix. The functions SumIfs, AverageIfs and CountIfs could give wrong results in some cases.

Bug Fix. When rendering a chart with an image inside, there could be an exception.

Bug Fix. Images inside charts with negative coordinates weren't rendered.

Bug Fix. Scatter charts now behave like Excel: if any of the x-axis values is a string, the chart is rendered as a line chart instead.

Bug Fix. XlsFile.SetAutoRowHeight wouldn't work if the row was empty.

Bug Fix. Chart rendering now renders charts where all values are 0.

Bug Fix. Chart rendering now respects the label positions next to axis, high and low.

Bug Fix. ExportEmptyBands, introduced in 6.6.25, wouldn't work in detail reports.

Bug Fix. In some cases, when generating reports and exporting them to pdf directly without saving them as xls/x, there could be a range check error.

Bug Fix. Tabs inside text in autoshapes now render as 8 spaces. (Note that we don't use the tab definitions from the autoshape, so this is an approximation.)
Bug Fix. When exporting to bitmaps, the bitmaps were a little bigger than the page size.

Bug Fix. Reports using LINQ could raise an Exception in some cases with null values.

Improved compatibility with invalid xlsx files generated by third parties. FlexCel can now read some invalid formulas written to xlsx by other third-party products.

New on v 6.6.23.0 - April 2015

Visual Studio 2015 and .NET 4.6 support. Support has been added for the latest Visual Studio and .NET betas.

Fix for the latest Xamarin version. Xamarin changed how encodings behave when they don't exist: before, they used to raise an Exception, and now they return null. This broke the fallback support in older FlexCel versions, and has been fixed now.

Bug Fix. There could be an error when rendering error bars in charts when there were missing values.

New on v 6.6.22.0 - April 2015

New property SheetView in XlsFile allows you to set the page view mode and the zoom for each mode. Now you can see or set the page view mode of a sheet (normal, page layout or page break preview). You can also specify the zoom for each of the modes. As usual, APIMate will show you the syntax.

New property LinksInNewWindow for FlexCelHtmlExport and FlexCelSVGExport. When you set LinksInNewWindow to true, both FlexCelHtmlExport and FlexCelSVGExport will export the hyperlinks in the file so they open in a new window.

Links to local files and to the current workbook are now exported in TFlexCelHtmlExport. Links to local files or to other cells in the current workbook are now exported to html. This allows navigating inside a file. Links within the current workbook work even when exporting to different tabs.

Breaking Change: XlsFile.AddImage(row, col, TUIImage) now takes into account the declared image dpi. Now, if you call AddImage without specifying the dimensions, FlexCel will use the dimensions corrected by the dpi declared by the image. This is the same way Excel works. In previous FlexCel versions we always assumed a 96 dpi image.

Rendering of error bars in xls charts. Now, when exporting to pdf/html/etc, FlexCel will draw error bars. All modes (StdErr, StdDev, fixed, percent, custom) are supported.

Improved display of line charts. Colors and sizes of lines in xls charts are now read from the newer xlsx records if they exist. This leads to a more faithful rendering, because the xlsx records have extra information, for example a line width that isn't restricted to 4 sizes.

TXlsNamedRange.GetRanges is now public and documented. GetRanges returns an array with the ranges composing a name. So if, for example, you have a name with the range "1:1, A:A", GetRanges will return an array with 1:1 and A:A. This method can be used to parse the PRINT_TITLES range.

Improved display of markers in charts. Markers in charts now render much more like Excel 2013, with the new options for images, etc.

Bug fix. XlsFile.FillPageHeaderOrFooter could return an extra "&" character at the end in some cases.

New on v 6.6.21.0 - January 2015

Better Xamarin package for OSX. The Xamarin package used to copy the files into the "osx" folder; now it copies them to "mac" to comply with the new naming.

New UsePrintScale property in FlexCelHtmlExport. If you set the new property FlexCelHtml.UsePrintScale to true, the exported html will use the scaling of the printed sheet instead of being exported at 100% zoom.

Bug fix. Some JPEG images weren't recognized as such.

Bug fix. Reports might not read expression values when tags had a default value, like <#value;0>
Bug fix. Sometimes FlexCel could fail to load an xlsx file with different images that had the same extension in different case (like image1.png and image2.PNG).

New parameters in the FlexCelPdfExport.AfterGeneratePage and BeforeGeneratePage events. The new parameters are the XlsFile being exported, the FlexCelPdfExport component doing the export, and the current sheet.

Improved RecoveryMode. FlexCel can now recover more types of wrong files when RecoveryMode is true.

When drawing xls charts, we now use the options for not plotting empty cells. This option was introduced in Excel 2007, and FlexCel was ignoring it. Now, if you choose not to ignore hidden rows or columns, the chart will render as expected.

New method XlsFile.RemoveUserDefinedFunction. XlsFile.RemoveUserDefinedFunction allows you to unregister a previously registered UDF for recalculation.

Breaking Change: When drawing chart labels whose result is an #N/A error, FlexCel now won't draw them. Excel 2003 or older draws #N/A errors in chart labels differently from Excel 2007 or newer: in older Excel versions, the label would just draw as #N/A; in newer Excel versions, it doesn't draw. To be consistent with the more modern Excel versions, FlexCel now won't draw them either when exporting to pdf or html.

FlexCelReport can now also use arrays, besides IEnumerable, for detail bands. When using an IEnumerable as data source, you can now use fields which are arrays, instead of an IEnumerable, as detail tables.

New NOGRAPHICS define. If you define NOGRAPHICS and undefine GDIPLUS in the FlexCel project properties, you'll get a build which doesn't depend on any drawing engine.

New DOTNETZIP define. If you define DOTNETZIP in the FlexCel project properties and add a reference to it, you'll get a build which uses DotNetZip instead of System.IO.Compression.

Bug fix. In some cases, when pasting a file with autofilters from Excel, you could get a range error. This is because Excel copies the entire filter, and part of the filter might be outside the copied range. Now FlexCel will resize the autofilter if it extends beyond the copied range.

New on v 6.6.11.0 - December 2014

Support for recalculating 31 new functions introduced in Excel 2013. Support has been added for: DAYS, ISOWEEKNUM, BITAND, BITOR, BITXOR, BITLSHIFT, BITRSHIFT, PDURATION, RRI, ISFORMULA, SHEET, SHEETS, IFNA, XOR, FORMULATEXT, COT, ACOT, COTH, ACOTH, CSC, CSCH, SEC, SECH, ARABIC, BASE, DECIMAL, COMBINA, PERMUTATIONA, MUNIT, UNICHAR and UNICODE.

Subtotal command in XlsFile. There is a new command, xls.SubTotal(...), which works the same as the "Subtotal" command in the "Data" tab of the Excel ribbon. While you shouldn't use this when creating new files, it can be useful for formatting old files. For new files, it is best to just create the subtotals in place.

New option "ExcelLike" in XlsFile.Sort. When doing an XlsFile.Sort, you can now choose between the correct way to handle formulas (previously the only option) and the "Excel" way of handling formulas, where references are not updated when a row is moved during the sort. The ExcelLike mode doesn't adapt formulas that reference the moved rows, but it can be much faster for tens of thousands of records.

New methods IsRowMarkedForAutofit and IsColMarkedForAutofit in XlsFile. The new methods return true if a row or column was marked for autofit.

New property ExcelFile.AllowEnteringUnknownFunctionsAndNames. If you set this property to true, you will be able to enter unknown functions inside formulas, like "=SomeText()".
Excel will show the result as a #NAME? error. When this property is false (the default), FlexCel will raise an Exception if the name is not known, which makes it easier to detect misspellings (a sketch follows at the end of these notes, below).

New properties XlsFile.RecalcVersion and FlexCelReport.RecalcVersion. These new properties allow you to specify the Excel version that last calculated the file. If you set it to, for example, Excel 2010, any Excel newer than Excel 2010 will recalculate the file on open, and ask for saving changes when you close the file. Excel 2010 or older won't recalculate the file on open. If you want every version of Excel to recalculate on open, set this property to AlwaysRecalc (the default). Look at the API developers guide for more information.

Breaking Change: The XlsFile.RecalcForced and FlexCelReport.RecalcForced properties have been removed. RecalcForced used a way to make files recalculate on open which has been deprecated in newer versions of Excel, and which causes validation errors in the generated files. For this reason, RecalcForced hadn't been doing anything for the last couple of years. Look at the new RecalcVersion property if you were using RecalcForced and want a new, non-deprecated way to create files which Excel will recalculate on open.

Improved Xamarin Unified API support. Changed the Unified API support to compile with the latest beta.

Included reports can now reference the formats of the parent report. An included report can now reference the formats of the parent report, the same way it can reference the expressions.

New overloads of XlsFile.GetObjectProperties and XlsFile.GetObjectAnchor that take an object path. These new methods allow you to access the properties and anchor of an object by specifying its name, as in Xls.GetObjectAnchor(-1, "@MyObject")

Autofitting columns with 90-degree rotation would always work as if the column had "Wrap text" enabled. When autofitting columns which had a rotation of 90 degrees, FlexCel would always try to wrap the text so it fits in multiple lines, even if the cell wasn't set to wrap. Now it will only do this if the cell has "Wrap text" on.

Breaking Change: Removed Xamarin Android 2.2 support. As Froyo (2.2) is now deprecated, we've removed this support in order to avoid deprecation warnings. The minimum supported version is now 2.3 (Gingerbread).

Bug Fix. When doing reports with LINQ, aggregating a double field could raise Exceptions.

Pivot tables in xlsx are now copied when you copy sheets. If you InsertAndCopySheet(...) a sheet with a pivot table from an xlsx file, the table will now be copied. (Pivot tables in xls were already copied.)

Unknown names in formulas now return #NAME? instead of #N/A. When a formula references a name that doesn't exist, FlexCel will now return #NAME? as the formula result, instead of #N/A as it used to.

Bug fix. When setting the text of an object using SetObjectText, the font might not be preserved.

Bug fix. When changing the font in the HtmlFont event in TFlexCelHtmlExport there could be an Exception.

Bug fix. The RoundUp and RoundDown functions could return the number unchanged instead of rounding it, in some cases when the number of digits was negative.

Bug Fix. Rendering some files with thousands of hidden columns could take too long.

Hidden rows could sometimes count when finding the maximum used column in the sheet. When printing or exporting an xls/x file, a hidden row with columns outside the printing range could in some cases cause the maximum column to be the one in the hidden row, which wouldn't be printed.
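A minimal sketch of the AllowEnteringUnknownFunctionsAndNames property described at the top of these notes; the cell position and function name are only for illustration:

    using FlexCel.Core;
    using FlexCel.XlsAdapter;

    XlsFile xls = new XlsFile(1, TExcelFileFormat.v2013, true);
    // Without this, SetCellValue below would raise an Exception,
    // because SomeText() is not a known function.
    xls.AllowEnteringUnknownFunctionsAndNames = true;
    xls.SetCellValue(1, 1, new TFormula("=SomeText()"));  // Excel shows #NAME?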
Improved compatibility with invalid xls files. FlexCel can now read some more invalid xls files created by third parties.

Sheet names aren't always quoted when returning formula text. In older FlexCel versions, the sheet name was always quoted in formulas. So if you retrieved, for example, the formula in A1, it could be 'Sheet1'!A2. Now we quote the sheet name only if needed, same as Excel does, so we would return Sheet1!A2 instead.

New convenience constructor for XlsFile which takes a Stream. Now you can create an XlsFile and open a stream in a single operation, without having to first create the XlsFile and then call xls.Open (a sketch follows at the end of these notes, below).

Improved error message when opening files with 0 bytes. When opening files with 0 bytes, or streams with the position at the end, FlexCel now reports a clear message instead of just saying that the file format isn't a supported Excel format.

New properties FlxConsts.MaxRowCount and FlxConsts.MaxColCount. Those properties return FlxConsts.Max_Rows + 1 and FlxConsts.Max_Columns + 1 respectively. Max_Rows and Max_Columns were zero-based, so for example Max_Rows returns 65535 for xls, and not 65536, which is the row count. The new properties return the one-based maximum, which makes it simpler to work with the one-based FlexCel API.

New on v 6.6.2.0 - October 2014

Better rendering of text in rotated shapes. In Excel 2003 or older, text in rotated shapes was shown without rotation, and that's how FlexCel would show it. Since Excel 2007, the text can rotate with the shape, using an undocumented record in xls. Now FlexCel can read that record and will honor the setting when converting to pdf/html/svg/printing/etc.

Bug Fix. A local link in a pdf to a page that wasn't exported could cause an Exception.

Bug Fix. Exporting "center on selection" cells could be too slow in border cases.

Bug Fix. XlsFile.SetCommentRow could set the wrong comment in some cases.

New on v 6.6.1.0 - October 2014

Generic reports using <#table.*> can now use user-defined functions. Now you can apply a user-defined function to a <#table.*> tag.

Generic reports using <#table.*> can now reference fixed fields in the table. Now you can mix <#table.field> with <#table.*> in the same cell.

Better compatibility with files created by third parties. FlexCel will now load invalid xlsx files with repeated comments.

Bug Fix. Generated xlsx files could be invalid after removing frozen panes from an existing file.

New on v 6.6.0.0 - October 2014

Breaking Change: The result of ShapeOptions.Text is now a TDrawingRichString instead of a TRichString. In order to allow more customization of the text of shapes and objects, we had to move the text property from a TRichString (which is used for cells, and in xls was also used for objects) to a TDrawingRichString (which in xlsx offers more possibilities to customize the text). As there is an automatic conversion from a TDrawingRichString to a TRichString, most code will just keep working. But there might be some cases (like functions where you pass a var parameter) where you will need to change the types from TRichString to TDrawingRichString in order to compile.

Better support for preserving autoshape text in xlsx. Now, when you change the text of an autoshape in xlsx, the existing properties of the text will be preserved.

Support for reading and writing a cell's text direction (RTL, LTR or Context). Now you can specify the text direction in a cell, and APIMate will show you how to do it. The FlexCel rendering engine also now supports RTL better (still without providing official RTL support, it is better in this version and usable in most cases).
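A short sketch of the stream constructor described above; the exact overload set isn't shown in these notes, so treat the single-argument form as an assumption:

    using System;
    using System.IO;
    using FlexCel.XlsAdapter;

    using (FileStream fs = File.OpenRead("book.xlsx"))  // hypothetical file
    {
        XlsFile xls = new XlsFile(fs);   // create and open in one step
        Console.WriteLine(xls.SheetCount);
    }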
The FlexCel rendering engine also now supports better RTL code (still without providing official RTL support, it is better in this version and usable in most cases) Support reading the number of horizontal and vertical page breaks in a sheet. Two new properties: XlsFile.HPageBreakCount and XlsFile.VPageBreakCount return the count of page breaks in a sheet. Bug Fix. XlsFile.LastFormattedCol returned the last formatted column - 1. Now it is returning the correct number. Bug Fix. Rendered xls charts could show an extra line in some corner cases with missing data. Bug Fix. TOPN datatables didn't inherit their relationships with master datasets. Bug Fix. Macro references in buttons could be copied wrong when copying sheets. Bug fix. When copying a range of cells to another sheet which included formulas introduced in Excel 2007 or newer there could be an error when saving as xls. Bug Fix. FlexCel enforced a maximum of 1023 manual page breaks for xls but not for xlsx. Now We also check that the generated xlsx files don't have more than 1023 manual page breaks, since that would crash Excel. New on v 6.5.0.0 - September 2014 PDF/A support. FlexCel can now export to PDF/A-1, PDF/A-2 and PDF/A-3 files. A new property FlexCelPdfExport.PdfType determines if the file is a standard PDF or the version of PDF/A. Xamarin Unified API Support. FlexCel now includes two new dlls compiled against Xamain.Mac.dll and Xamarin.iOS.dll instead of XamMac.dll and monotouch.dll, in order to support the new Unified API ( ) This means you can now compile 64 bit iOS and OSX applications with FlexCel. Breaking Change: Generated PDF files are now tagged by default. The files generated by FlexCel are now tagged by default, as tagging is an accessibility requirement. Tagged PDF files are bigger than normal files so in order to try to get smaller files FlexCel uses now features available only in Acrobat 7 or newer. To go back to generating untagged files you can set FlexCelPdfExport.TaggedPdf = false. To go back to creating files compatible with Acrobat 5 or newer, set FlexCelPdfExport.PdfVersion = TPdfVersion.v14 Breaking Change: Generated PDF files are now compatible with Acrobat 7 or newer. In order to reduce the size of the tagged pdf files that FlexCel now creates by default, FlexCel now generates files that need Acrobat 7 or newer to open. To go back to creating files compatible with Acrobat 5 or newer, set FlexCelPdfExport.PdfVersion = TPdfVersion.v14. Note that as PDF/A-1 requires compatibility with Acrobat 5 or newer, when exporting PDF/A-1 FlexCel will use v14 automatically. PDF/A-2 and 3 don't require v14, so it isn't used by default for those formats. Breaking Change: Generated PDF files now embed the fonts by default. Now the default value of FontEmbed in FlexCelPdfExport and PdfWriter is TFontEmbed.Embed. While this will create slightly bigger files, they will show fine everywhere, including mobile devices which might not have the fonts. You can revert to the old behavior by changing FontEmbed to be TFontEmbed.None. Breaking Change: FlexCel will throw an Exception when trying to embed a font that doesn't have a license allowing embedding. FlexCel will now check that the embedded fonts in PDF have a license that allows embedding. You can revert to the old behavior by setting UnlicensedFontAction = TUnlicensedFontAction.Ignore, in case you have an agreement with the Font author. You can also set UnlicensedFontAction = TUnlicensedFontAction.Replace to replace the unlicensed fonts with a fallback font. 
FlexCelTrace will alert when replacing or ignoring a font that is not licensed for embedding.

Ability to embed files inside the PDF. Now you can embed arbitrary files inside the pdf. This allows, for example, shipping the original xls/x file inside the pdf. This is also supported in PDF/A-3.

Ability to set the language of the PDF files. You can now set FlexCelPdfExport.Properties.Language to specify the language of the generated PDF file. Note that the language will be used by text-to-speech engines to read text out loud, so it is recommended to set this property.

PDF properties are now saved in XMP format. PDF properties (like Author, Title, etc.) are now saved in XMP xml format besides the PDF format. XMP is a requirement for PDF/A, and allows tools that don't understand PDF to read the metadata. Note that the files generated by FlexCel will now be a little bigger due to this metadata, because it can't be compressed.

Ability to embed a color profile inside the generated pdf files. You can now set the FlexCelPdfExport.EmbedColorProfile property to embed a color profile in the generated files. As a color profile isn't required and it increases the size of the generated files, this option is false by default. But as it is required by PDF/A, a color profile will always be embedded in PDF/A files.

Breaking Change: If you don't specify properties for pdf files in FlexCelPdfExport (like Author, Title, etc.), they will now be read from the Excel file being exported. If you want to revert to the old behavior, you can set UseExcelProperties = false in FlexCelPdfExport.

New structure StandardMimeType returns the mime types for xls, xlsx, xlsm, pdf, etc. You can use StandardMimeType where you need to specify a mime type for a file generated with FlexCel, instead of having to manually search for the type.

Support for the <#DBValue> tag in LINQ reports. DBValue used to work only with datasets; now you can use it also with LINQ data providers like arrays or lists.

Breaking Change: VirtualDataTable doesn't have the MoveToRecord virtual method anymore; instead it has a new GetValue(row, column). In order to provide efficient support for DBValue in LINQ reports, we needed to change the way to move to a random record, so we don't use MoveToRecord anymore and use GetValue(row, column) instead. This change is unlikely to affect you unless you are writing your own VirtualDataTable descendant; if you are, the compiler will point at the missing method implementation.

Improved Search and Replace. FlexCel now better preserves the format of the cells being replaced. A new overload of XlsFile.Replace allows you to specify the format or the values of the replaced cells on a cell-by-cell basis.

Support for entering names referring to other files using Excel notation. A normal reference to another file has the filename inside brackets, like "[file1.xlsx]Sheet1!A1". But in the case of global names, Excel uses the notation "file1.xlsx!name1", without brackets, which makes it impossible to know if you are entering a name reference to another file (file1.xlsx) or a name reference to the same file, in a sheet named file1.xlsx. FlexCel didn't allow this way of specifying names, and always required brackets, so you had to write [file1.xlsx]!name1 to enter the name. Now you can use the same notation as Excel, and FlexCel will allow it as long as you first set up a TWorkbook which includes file1.xlsx.

Support for format strings that specify fractions. Now, when using a format string like "??/??",
the numbers will be displayed as fractions. For example, 0.75 will show as 3/4. All Excel formats for fractions are fully supported.

New constructor for XlsFile allows specifying the Excel version in one step. Now you can create a new file in, for example, the Excel 2010 file format by writing XlsFile xls = new XlsFile(1, TExcelFileFormat.v2010, true); in C#, or xls := XlsFile.Create(1, TExcelFileFormat.v2010, true); in Delphi (a sketch follows at the end of these notes, below).

New enumeration value TExcelFileFormat.v2013. We now provide a specific TExcelFileFormat.v2013 enumeration value to create Excel 2013 files.

iOS and OSX previewer compatibility improved. Some xlsx files generated by FlexCel that wouldn't show in the iOS/OSX previewer will display now. Xlsx charts now update their caches so the previewer will show them correctly.

TFlxApplyFont has a new StyleEx property that allows fine control over which styles are applied. Before this release you could only apply the full style of the font, or nothing, by changing the Style property. Now you can apply individual styles like bold or italics by changing the StyleEx property.

XlsFile.Sort now does a stable sort. When you sort a range of cells, the order will now be preserved for items with the same values.

Bug Fix. Local named ranges could lose their sheet when inserting sheets from another file.

Shapes inside charts are now preserved in xlsx files. Xlsx charts will now preserve the shapes inside them.

Ability to preserve the creation date in xlsx files. By default, FlexCel will set the modification date to the date the file was saved. But if you are modifying an existing file and want to preserve the original creation date, you can now do it by setting XlsFile.DocumentProperties.PreserveCreationDate to true.

Better support for Excel 4.0 macro sheets. Files with Excel 4.0 macros should load better.

Bug Fix. XlsFile.Replace might not keep existing cell formats when replacing dates.

Bug Fix. Chart.DeleteSeries could break the format of the remaining series when called on a series in the middle of the chart.

Bug Fix. There could be an exception when deleting some ranges of cells with hyperlinks.

Bug Fix. Negative dates in 1904 mode used to display as ####. Now they display as in Excel. Note that this is not a logical way to display dates: -1 doesn't mean 12/31/1903, it means "-1/2/1904". Negative dates actually increase as the numbers get smaller.

Support for UTF-16 surrogates when exporting to pdf. When exporting to pdf, FlexCel will now correctly display UTF-16 surrogate pairs. FlexCel was already surrogate-aware in the rest of the codebase.

Support for space (" ") named styles. While Excel won't let you enter a cell style named " ", it will allow you to use it if you manually edit an xlsx file and create it there. To be able to deal with those files, FlexCel now supports reading and writing styles named with a space.

When adding controls with linked cells, the linked cells are now modified to match the initial value of the control. Now, when adding comboboxes, listboxes or checkboxes linked to a cell, the cell will be modified to match. Note that this change applies to newly created objects; if you change the value of an existing control, the linked cell was always updated in all FlexCel versions that supported changing control states.

Bug Fix. Some xlsx files with charts could enter an infinite loop when loading.

Bug Fix. When replacing rich strings, the rtf runs could be wrong in border cases.
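The new one-step constructor from the notes above, shown end to end; the cell value and output file name are only for illustration:

    using FlexCel.Core;
    using FlexCel.XlsAdapter;

    // Create an Excel-2010-format file with one sheet, write a cell, save.
    XlsFile xls = new XlsFile(1, TExcelFileFormat.v2010, true);
    xls.SetCellValue(1, 1, "hello");
    xls.Save("hello.xlsx");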
New on v 6.3.1 - May 2014

Improved image rendering. Some files created by third parties could display the images in the wrong position.

New on v 6.3.0.0 - April 2014

New on v 6.2.1.0 - March 2014

Improved default font in headers and footers. In previous versions, if no font was specified for the headers or footers, FlexCel would default to Arial. Now it defaults to the normal font of the file.

iOS and Android pdf encoding handling. FlexCel will now create pdf files without using the Win1252 encoding, which isn't included by default in Xamarin for iOS or Android.

New on v 6.2.0.0 - February 2014

Support for preserving PowerPivot tables in xlsx. PowerPivot tables are now preserved in xlsx.

Improved Excel 2013 support. FlexCel could fail to open some complex Excel 2013 xlsx files.

Improved handling of dates between 1900-01-01 and 1900-02-28. Excel considers 1900 to be a leap year, even though it wasn't. As FlexCel uses the .NET DateTime type, which correctly assumes 1900 wasn't a leap year, dates between 1900-01-01 and 1900-02-28 would appear in FlexCel as one day before the dates in Excel. Now FlexCel corrects those dates so they look as in Excel, but the DateTime datatype still doesn't have Feb 29, 1900, so that date will still be wrong. It is still advisable not to use dates before March 1, 1900 when working in Excel.

Support for running on machines with FIPS 140 enabled. FlexCel can now be used on machines with FIPS 140 policies enforced.

New static events in TPdfWriter. There are 3 new static events: GetFontDataGlobal, GetFontFolderGlobal and OnFontEmbedGlobal. They work like the already existing GetFontData, GetFontFolder and OnFontEmbed, but being static, they work at application level. If you set them, you don't need to set them for every TPdfWriter or TFlexCelPdfExport instance you create.

Improved rendering of formatted numbers. Some formatting strings (for example "general;-general") were not rendered like Excel.

Support for forcing the codepage in Excel 95 files. Now you can force a codepage when opening an Excel 95 file with xls.Open(..., Encoding). While normally you don't need to specify a codepage for xls95, since it is specified in the file, if the file doesn't have a codepage record, or has a wrong one, you can now specify it here.

Support for displaying numbers in engineering notation. When displaying numbers in scientific notation, FlexCel would always use normalized notation. Now it can also use engineering notation if the format string specifies it.

SheetProtection is copied when copying sheets from one file to another. FlexCel will now copy the sheet protection when you copy sheets from another file object.

Bug fix. The rendering engine could fail to draw the top gridline on pages after the first when PrintGridLines was true and you were repeating rows at the top.

Bug fix. When opening xls files with data validations and saving them as xlsx, some relative ranges could point to an incorrect cell range in the xlsx file.

Bug fix. Charts in xlsx files didn't preserve textures.

Bug fix. There was an error when recalculating the =LARGE and =SMALL functions over non-contiguous ranges of cells.

Bug fix. Sometimes AddFormat() could add a format twice to the file.

Bug fix. In certain cases, when a macro name had a dot "." in it, FlexCel could fail to open the file.

Improved 3rd-party compatibility. Improved the generated xls files so they can be loaded by some 3rd-party tools.

New on v 6.1.0.0 - September 2013

Improved chart rendering.
While the chart engine can still only draw charts in xls files at the moment, it can now read some embedded xlsx records in the xls file, so the chart will display like Excel 2007 or newer, not like Excel 2003.

Improved xlsx autoshape rendering and conversion. Autoshapes in xlsx files are now converted better to xls, and also render more faithfully.

Improved rendering in iOS and Android. Texture bitmaps are now also supported in iOS and Android, besides Windows.

iOS 7 support. Changes to better support iOS 7. Some records in xlsx files have been changed so the iOS 7 viewer can show the files.

Visual Studio 2013 RC support. VS 2013 RC is now supported.

Improved mobile documentation. New demos added for Android and iOS. Reviewed and improved the documentation.

Bug fixes. Small bugfixes.

New on v 6.0.0.0 - August 2013

Cross-platform support. FlexCel has gone through a big review to make it cross-platform. It now runs in Xamarin.iOS, Xamarin.Android and Xamarin.Mac. Read, write, modify and export your Excel files to pdf or html from any iOS, Android or OSX device.

Support for the new Excel 2013 xlsx encryption. Xlsx files encrypted with Excel 2013 can now be opened.

Reduced memory usage. FlexCel 6 will use about 1/2 to 1/4 of the memory FlexCel 5 used. We've done a big rearchitecture of the code to ensure it runs fine on memory-limited devices, and this improvement is also available for Windows.

More conformant xls files. All xls files created by FlexCel now pass the Microsoft Office validator if the original file passed it. Note that not all xls files created by Excel pass the Office validator.

In OSX and iOS the pdf engine no longer needs access to the "Fonts" folder. The pdf engine in OSX and iOS can now get the fonts directly from memory.

Support for changing how FlexCel displays the internal numeric formats. Excel has some internal numeric formats that aren't stored in the file, and different versions of Excel in different languages might show them differently. For example, format 37 is defined in some Excel versions as "#,##0 $;-#,##0 $", while in others it is defined as "#,##0 $;(#,##0) $" (showing negative numbers in parentheses instead of with a minus sign). It is best not to use those formats, which will display differently depending on the Excel version, but if you need to make FlexCel behave like one specific localized version, you can change those formats with the static method XlsFile.SetBuiltInFormat(...)

Support for recalculating the XIRR and XNPV functions. XIRR and XNPV are now recalculated.

New startPageToExport and totalPagesToExport parameters in TFlexCelPdfExport.ExportSheet. Those parameters allow you to control how many pages are exported.

Many improvements and small bug fixes. There are too many small changes to mention here, but there is hardly any aspect of the library that hasn't been improved.

Breaking Change: Compact Framework is no longer supported. Due to all the work to support the new platforms, we had to do some cleaning. Compact Framework required a lot of effort to maintain because it lacked too many features, and it isn't being developed anymore by Microsoft.

Breaking Change: System.Drawing classes have been replaced by internal TUI classes in the API. This means, for example, that System.Drawing.Color is now TUIColor. This change shouldn't break much code, since those classes weren't used much in the API, and also because there is an implicit conversion between System.Drawing.Color and TUIColor.
We've made a lot of effort to try to minimize code changes, and chances are that you won't need to change a line of code, but some corner cases might still happen. This change was necessary because most of the newer .NET variations don't include a System.Drawing namespace, or include an incomplete one.

New on v 5.7.18.0

Bug fix. Images could become invalid when copying sheets between different files.

New on v 5.7.17.0

Improvements in FlexCelPreview. The new AutofitPreview property automatically resizes the preview so it fits to width, height or page. See the "Custom Preview" demo for more information. By default, the preview now scrolls past the last page, enough to allow you to select any page; you can go back to the old behavior by setting the "EndPreviewAtLastPage" property to true. The new AutofitPreviewOnce method allows you to autofit the preview just once. The new method MaxPageSize returns the maximum width and height of the pages in the preview.

Bug fix. Sometimes, when changing some print options with the API, the number of copies might be undefined. It will now be 1 in those cases.

Bug fix. FlexCel can now read xlsx files with invalid timestamps.

New on v 5.7.16.0

Support for xltx and xltm files. Now you can save xltx and xltm "template" files, as you already could save xlt. The property TExcelFile.IsXltTemplate tells you whether the file you opened was a template, and you can change it in order to save templates. Note: FlexCel will automatically save the files as templates when you save to a file with extension xlt, xltx or xltm, no matter the value of the IsXltTemplate property; you only need to set it when saving to streams (a sketch follows at the end of these notes, below).

New method TExcelFile.RecalcRange. RecalcRange can take any formula that evaluates to a range of cells and return the array of rectangular ranges.

New overload for GetDataValidation. The method TExcelFile.GetDataValidation can now also return the range where the data validation is applied. You can use this range as an offset to call the new TExcelFile.RecalcRange method and convert the text formula in a data validation to a range of cells.

New on v 5.7.14.0

New properties in FlexCelPreview. FlexCelPreview has new properties for customizing how the page looks: ShowThumbsPageNumber, PageShadowSize, PageShadowColor, PageBorderColor, PageBorderWidth, PageBorderStyle, PageNumberBgColor, PageNumberSelectedBgColor, PageNumberTextColor, PageNumberSelectedTextColor and Resolution.

New methods GetSheetSelected and SetSheetSelected in XlsFile. These two new methods allow you to know or set which sheets are selected in a file. Unlike XlsFile.ActiveSheet, you can select multiple sheets in a single file.

Small improvements. The RoundUp and RoundDown functions now ignore roundtrip digits, like Excel. Names that evaluate to rectangular coordinates can now show top, left, right and bottom coordinates even if they are formulas that evaluate to rectangles, and not direct ranges.

Bug fixes. The preview rectangle representing the page was a little larger than it should be. Fixed an html exporting bug when trying to export some files with 16384 columns. Improved compatibility with xlsx files created by third parties.

New on v 5.7.10.0

Performance improvements. Rendering a file with thousands of merged cells is much faster now.

New on v 5.7.9.0

Bug Fix. Fixed a bug in the preview component. The bug was introduced in 5.7.6.0.

New on v 5.7.8.0

Bug Fix. References to external files when saving complex xlsx files which were converted from xls, or manually created, could be invalid.
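A minimal sketch of saving a template file, per the xltx/xltm entry above; the file names are hypothetical:

    using FlexCel.XlsAdapter;

    XlsFile xls = new XlsFile(true);
    xls.Open("invoice.xlsx");
    // Saving to an .xltx path makes FlexCel write a template regardless
    // of IsXltTemplate; the property only matters when saving to streams.
    xls.Save("invoice.xltx");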
New on v 5.7.6.0 - May 2012

Improved virtual mode. Virtual mode can now skip sheets you don't need to read, and also stop reading the file as soon as you have read all the values you need. Look at the improved "Virtual Mode" demo for more information. New Recovery Mode. A new property in XlsFile, "RecoveryMode", tells FlexCel to try to ignore many common errors in corrupt files, so you might be able to open them. Bug fixes. Improved compatibility with complex and third-party created files, fixes in the preview component, and many other small fixes and improvements.

New on v 5.7.2.0

Bug fix. TRangeCopyMode.OnlyFormulasAndNoObjects would copy objects when copying from one file to another. Bug fix. The Offset function would return an error when recalculating if you used missing arguments in the last 2 parameters.

New on v 5.7.1.0

Bug fix. Some complex merged/span cells could cause wrong results when exporting to HTML. Bug fix. XlsFile.AddSheet added a sheet before the last position, not after the last one. Now it behaves as expected.

New on v 5.7.0.0 - March 2012

Replaced System.IO.Packaging with a fully managed native implementation. The new classes are faster than System.IO.Packaging and also fix some bugs present in it. For example, it doesn't use IsolatedStorage, so it is thread safe for xlsx files bigger than 10 MB and doesn't require special permissions to access IsolatedStorage. It also fixes many issues in the Mono System.IO.Packaging implementation, allowing FlexCel to officially support xlsx under Mono. IMPORTANT: If you are creating very big xlsx files in threads, please make sure to update to this version. Support for xlsx in Mono and .NET 2.0. Now xlsx is fully supported in both Mono and .NET 2.0. ATLEAST tag for the config sheet in reports. AtLeast ensures a datasource has at least n records, and if it doesn't, it will return a default value for the records between Count(Datasource) and n. See the documentation in the "Using FlexCel Report" pdf. Master-detail reports can now be defined with 2 ranges that expand to the same range. Now if you need, for example, a Master range and a Detail range in the same row, you can make the Master range bigger than Detail (for example Master = A1:B1 and Detail = A1). While both ranges expand to the same range (both A1:XFD1), FlexCel can now realize the bigger range is the master and use it to know the master-detail relationship. Before, you would get a "ranges intersect" message if Master wasn't actually bigger than Detail. New "Text Qualifier" parameter when importing csv files. Now when using XlsFile.Import(...) to import a text file, you can define a text qualifier different from a double quote ("). The text qualifier is used when the delimiter is part of the field and so it must be quoted. New property ExcelFile.PrintLandscape. Allows setting landscape printing more easily than with PrintOptions. Bug fixes. When exporting to PDF with a dpi other than 96 and using "XP Style dpi scale", the resulting PDF might be wrong. Issues when printing all sheets in a workbook and skipping the first pages. Duplicated row records are now tolerated so some invalid files can be read. Bug fixes. Recalculation of the Lookup and Match functions now works more like Excel. Bug fixes. Now FlexCel can read invalid xls files which have wrong strings. PdfExport could raise an Exception when rendering 0-pixel-width metafiles. Bug fix. Some numbers near the maximum possible in Excel (1e308) could be stored with reduced precision. Bug fixes. Small issues in chart preserving. Speed issues when rendering big metafiles.
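Combining two of the additions above, opening a damaged file and switching to landscape printing might look like this sketch (RecoveryMode and PrintLandscape are named in the notes; the default constructor and treating both properties as plain booleans are assumptions):

    XlsFile xls = new XlsFile();   // default constructor is an assumption
    xls.RecoveryMode = true;       // tolerate common errors in corrupt files
    xls.Open("damaged.xls");
    xls.PrintLandscape = true;     // simpler than going through PrintOptions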
New on v 5.6.0.0 - November 2011

Chart preserving in xlsx. Now xlsx charts are preserved and updated when you insert or copy rows, copied when you copy ranges, etc. Charts aren't converted between the xls and xlsx file formats, so in order to use xlsx charts you need to start from an xlsx file. Support for reading Excel 5 and 95 xls files. While Excel 95 is not in mainstream use, many third-party libraries produce xls 95 files today, and now you can read those files directly with FlexCel. After opening them, they must be saved as xls 97 or xlsx. Support for accessing nested properties when using LINQ in reports. Now when using LINQ in reports you can access nested properties from the template. If for example you have a class "Orders" and this class has a nested class "Customer", you can write in the template "<#Orders.Customer.Name>". You can use as many nesting dots as needed, as long as the properties have a single value. Support for calculating circular references. Now FlexCel can calculate iterative workbooks. The new properties "OptionsRecalcCircularReferences", "OptionsRecalcMaxIterations" and "OptionsRecalcMaxChange" in XlsFile allow you to control the iterative recalculation. As always, the APIMate tool will show how to set those properties in a sheet. APIMate improvements. APIMate will now show the scheme of the fonts when using themed fonts (Excel 2007). Bug fix. There could be an error when manually copying formulas that included named ranges from one workbook to another. Bug fixes. <#delete range> tags could work wrongly in nested reports. Xlsx files could be invalid after copying cells from another workbook. Better compatibility with xls files generated by other 3rd party tools. Bug fix. The .NET 3.5 XMLReader can hang when reading some malformed xlsx files. Now FlexCel will throw an Exception when reading those files in .NET 3.5. Note that .NET 4 already worked fine and keeps doing so. Bug fix. Xlsx files could save the same image more than once when it was used in many places. Now only one copy of every unique image is stored. Bug fix. Xlsx files without printer information could default to landscape instead of portrait. Bug fix. Problem when saving xlsx files with more than one pivot table in different sheets. Performance enhancements. Faster reading of 2007/2010 xls files with thousands of manual styles, and improved xls saving performance.

New on v 5.5.1.0 - August 2011

Bug fixes. Manual page breaks in xlsx could be ignored by Excel. Improved compatibility with invalid xls files. Performance improvements. Exporting to html/mhtml is now faster for some files. Support for rendering non-contiguous print areas. Now when exporting to Pdf/Html/Images/etc., FlexCel will honor print areas that have many different sections, like "=Sheet1!$A$1:$B$4,Sheet1!$D$5:$F$7". Bug fix. In some cases, after copying a chart from one sheet to another, you wouldn't be able to select the chart anymore from FlexCel to continue working with it. Performance improvements. Exporting to pdf is now faster for some particular files. A new property "IgnoreFormulaText" in XlsFile allows you to ignore the formula text when reading the cells in a file, speeding up the reading. Look at the "Performance" pdf for more information. Bug fix. Malformed hyperlinks can now be read. FlexCel can now read xlsx files with malformed hyperlinks. Added a new ErrorAction TExcelFileErrorActions.OnXlsxMissingPart. A new ErrorAction member has been added to allow reading corrupted xlsx files that lack some parts.
This is off by default; you need to explicitly call XlsFile.ErrorActions &= !TExcelFileErrorActions.OnXlsxMissingPart for it to work. As with all other errors, when it happens it will be logged to FlexCelTrace. Bug fixes. Fixed a problem that could rarely happen with nested relationships inside ADO.NET tables in a report. Bug fixes. Support for reading [this row] tokens in formulas inside Excel 2007 tables. We still don't process them and they will be imported as #REF!, but at least you can read the file. Bug fixes. Issue when copying data validations and conditional formats by columns. Bug fixes. When sorting a cell range, formulas referring to that range could be offset by one.

New on v 5.5.0.0 - May 2011

Breaking Change: Native support for LINQ and Entity Framework as data sources for reports. Now you can use any IQueryable iterator to create a report, besides DataSets. Breaking Change: VirtualDataTable and VirtualDataTableState objects had to be modified to allow the best performance when using IQueryable. If you have defined your own VirtualDataTable/State descendants, you will have to make some changes to the code for it to compile. New "Virtual Mode" for reading xls and xlsx files on demand. The new "Virtual Mode" allows you to read huge xls/xlsx files on demand, without loading the full file into memory. If you are importing big files with FlexCel, this new mode can make a big difference. Look at the "Virtual Mode" demo for more information. Support for reading and writing encrypted xlsx files. Reading and writing encrypted xlsx files is fully supported, both Excel 2007 (Standard Encryption) and Excel 2010 (Agile Encryption). Support for protected xlsx files. Full support for protected xlsx workbooks and sheets, including password protection. Pivot Table preservation in xlsx files. Pivot tables are now preserved when saving xlsx files, and they can be used in reports, and copied between sheets or files. Macro preservation in xlsx files. Macros are now preserved when opening xls files and saving as xls or xlsx, or when opening xlsx files and saving as xlsx. Full support for R1C1 formulas. Now you can use the R1C1 notation besides A1 when entering or reading formulas. You can change the cell reference mode with the property XlsFile.FormulaReferenceStyle. Full support for all objects in the Forms palette (radio buttons/group boxes/comboboxes/listboxes/spins/labels/scrollbars/buttons) in the API, rendering, and xlsx. The new methods allow you to add and modify any object in the Forms toolbar. Also, all those objects will now be printed/exported to pdf/html/images, and they are fully supported in xlsx too. Support for buttons/checkboxes/radio buttons/group boxes/comboboxes/listboxes/spins/labels/scrollbars in APIMate. Now APIMate will show how to add/modify checkboxes and all objects in the Forms palette. Support for Autoshapes in xlsx. Now autoshapes in xlsx are preserved, converted between xls and xlsx, and rendered. Header and footer image support in xlsx files. Images in headers and footers are now fully supported in xlsx. Exporting named ranges to html. A new property "ExportNamedRanges" in FlexCelHTMLExport allows you to export the names in the sheet as span ids that you can use later to modify those cells with javascript. A new event, "NamedRangeExport", allows you to customize how those names are exported. XlsFile now implements IEnumerable. You can now loop through the cells in an Excel file with a foreach loop.
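Since XlsFile now implements IEnumerable, iterating the cells could look like this minimal sketch (the element type yielded by the enumerator is not stated in the notes, so var is used):

    // Walk every cell the enumerator yields and print it.
    foreach (var cell in xls)
    {
        Console.WriteLine(cell);
    }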
New overload of XlsFile.NewFile. NewFile now allows you to specify the version of Excel used to create the file, so you can specify the version of the blank xls or xlsx files created by FlexCel. Different versions of Excel have different default fonts, column widths, etc., and now you can specify exactly which version you are creating. This is especially useful for APIMate, since in older FlexCel versions it would always create a 2003 xls file and modify it with code to match the newer versions. Now it creates the correct file, and much less code is needed. Fixed-length text exporting now exports merged cells and cells that span to the right. When exporting Excel files to fixed-length text, merged cells and cells spanning to the right will now use the full length available instead of being cut at the cell end. Breaking Change: Deprecated Get/SetCheckboxLinkedCell methods in the API. While those methods will still work, we have introduced a generic Get/SetObjectLinkedCell method that should be used instead. Bug fixes and performance optimizations. Xlsx files now load and save much faster, and performance was improved for xls files as well. Pdf files are faster too. A new document on how to get the best performance out of FlexCel is also included. Experimental MonoTouch support. FlexCel can now be compiled for MonoTouch. With it, you can read, write and recalculate xls files in iPhone apps. Note that this support is basic and not fully tested, even though it appears to work fine. The xlsx file format and rendering (exporting to pdf/html/printing) are not supported. Search for "MonoTouch.sln" in the distribution.

New on v 5.3.0.0 - August 2010

Support for Recalculation of 49 built-in functions new to Excel 2007. Includes support for: AverageIf, AverageIfs, Bin2Dec, Bin2Hex, Bin2Oct, Convert, CountIfs, CoupDayBs, CoupDays, CoupDaysNc, CoupNcd, CoupNum, CoupPcd, Dec2Bin, Dec2Hex, Dec2Oct, Delta, DollarDe, DollarFr, Duration, EDate, Effect, EoMonth, FactDouble, Gcd, GeStep, Hex2Bin, Hex2Dec, Hex2Oct, IfError, IsEven, IsOdd, Lcm, MDuration, MRound, MultiNomial, NetworkDays, Nominal, Oct2Bin, Oct2Dec, Oct2Hex, Quotient, RandBetween, SeriesSum, SqrtPi, SumIfs, WeekNum, WorkDay, YearFrac. Some of the functions are new to Excel 2007 (like AverageIf), and others were previously available in Add-ins. Look at SupportedFunctions.xls in the documentation for more details. Support for Recalculation of 8 built-in functions new to Excel 2010. Includes support for: NETWORKDAYS.INTL, WORKDAY.INTL, AGGREGATE, CEILING.PRECISE, ISO.CEILING, FLOOR.PRECISE, PERCENTILE.EXC, QUARTILE.EXC. Look at SupportedFunctions.xls in the documentation for more details. Support for all of Excel 2010's "Renamed Functions". Now you can enter any of the Excel 2010 renamed functions in FlexCel, and those functions whose previous name was already recalculated in FlexCel (and which have the same parameters) will now also recalculate in FlexCel. Renamed functions: BETA.DIST, BETA.INV, BINOM.DIST, BINOM.INV, CHISQ.DIST.RT, CHISQ.INV.RT, CHISQ.TEST, CONFIDENCE.NORM, COVARIANCE.P, EXPON.DIST, F.DIST.RT, F.INV.RT, F.TEST, GAMMA.DIST, GAMMA.INV, HYPGEOM.DIST, LOGNORM.DIST, LOGNORM.INV, MODE.SNGL, NEGBINOM.DIST, NORM.DIST, NORM.INV, NORM.S.DIST, NORM.S.INV, PERCENTILE.INC, PERCENTRANK.INC, POISSON.DIST, QUARTILE.INC, RANK.EQ, STDEV.P, STDEV.S, T.DIST.2T, T.DIST.RT, T.INV.2T, T.TEST, VAR.P, VAR.S, WEIBULL.DIST, Z.TEST. Look at SupportedFunctions.xls in the documentation for more details. New "BALANCED COLUMNS" mode for reports.
Now you can do parallel column reports where all columns stay balanced and cells are automatically added to pad the columns with fewer records. Look at the new "Balanced Columns" demo for an example of how to use it. New FIXEDN ranges for reports. FixedN ranges will behave as "FIXED" ranges for the first n records, and then behave as normal "__" ranges. For example, the name "__db__FIXED2" will overwrite the first 2 records in the template, and then insert the rest. Look at the new "Balanced Columns" demo for an example of how to use it. New ROWS function for reports. Allows you to create datasources on the fly from the template with a defined number of rows. Look at the new "Balanced Columns" demo for an example of how to use it. New TRangeCopyMode.Formats to copy formats from one block of cells to another. Now you can call InsertAndCopyRange with TRangeCopyMode.Formats to copy the cell formats from one place to another. Improved Medium Trust support. Our obfuscation tool was having issues when running in Medium Trust; this should now be fixed. This allows deployment in shared hosting like GoDaddy. Many small fixes and enhancements. As always, a lot of small fixes and improvements have been done.

New on v 5.2.0.0 - April 2010

Breaking Change: Support for .NET 4.0. Includes support for the new 4.0 security model. Breaking Change: Deprecated support for .NET 1.1. In order to move faster to the new technologies, we had to deprecate .NET 1.1 support for this version. Full Comment support in xlsx. Now comments are fully supported in xlsx besides xls. You can also set extended properties like the comment color directly from the API. Full Data Validation support in xlsx. Now data validation is fully supported in xlsx besides xls. Full Hyperlink support in xlsx. Now hyperlinks are fully supported in xlsx besides xls. Full Checkbox support in the API, rendering, and xlsx. The new methods Get/SetCheckboxState, Get/SetCheckboxLinkedCell and AddCheckbox allow you to add and modify checkbox states and linked cells. Also, checkboxes will now be printed/exported to pdf/html/images, and they are fully supported in xlsx too. New "DateFormats" parameter supported when opening or importing CSV files, and also when setting cells from strings. This parameter allows you to specify only a subset of supported datetime formats when importing, to ensure .NET won't interpret invalid strings as dates. For example, calling: xls.Open("test.csv", TFileFormats.Text, ';', 1, 1, null, new string[] { "d/M/yyyy", "hh:mm" }, Encoding.Default, true); will only import dates in format "d/M/yyyy" or times in format "hh:mm". New "FirstSheetVisible" property in XlsFile. This property controls which sheet tab is shown first in the sheet bar at the bottom of Excel. New "CenteredPreview" property in FlexCelPreview. When true, previews will render centered in the window, as they do in Excel. Added support for new functions in recalculation: FREQUENCY is now supported. Performance improvements and bug fixes. The move away from .NET 1.1 allowed us to switch much more of the code to generics, with up to a 10% speed up. Together with other performance improvements, 5.2 can be up to 30% faster in some cases. Database in all demos migrated from Access to SQL Server Compact. As Microsoft still doesn't support the JET driver in 64 bits, we changed the demos to use SQL Server Compact Edition instead. This way you will be able to test the database demos in pure 64 bits. Tested against Office 2010 RTM. Generated files have been tested against the release version of Office 2010.
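Wrapped into a compilable fragment, the DateFormats call quoted above might look as follows (the Open call is verbatim from the notes; the XlsFile constructor and the FlexCel namespace names are assumptions):

    using System.Text;
    using FlexCel.Core;        // namespace names are assumptions
    using FlexCel.XlsAdapter;

    XlsFile xls = new XlsFile();
    // Only "d/M/yyyy" dates and "hh:mm" times are parsed as date/time
    // values; other strings are left as text instead of being guessed.
    xls.Open("test.csv", TFileFormats.Text, ';', 1, 1, null,
             new string[] { "d/M/yyyy", "hh:mm" }, Encoding.Default, true);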
New on v 5.1.0.0 - January 2010

Excel 2010 XLS file support. Excel 2010 introduced a new "Protected View" that will flag old FlexCel xls files as "unsafe". This release fixes this, so files will open in 2010 without warnings. It is important to update to this version as soon as possible, so when Excel 2010 is released your files will keep on working without warnings. BASIC IMAGE SUPPORT IN XLSX. Simple images are now fully supported in xlsx besides xls. They will be converted and preserved when you open an xls file and save as xlsx or vice versa, and also rendered to pdf, etc. Grouped images and autoshapes are still not supported in xlsx, but coming soon. THEME SUPPORT. Now the rendering engine will use other themes besides the standard Office theme, and you are also able to modify the themes in a sheet. BETTER INDEXED COLOR SUPPORT. The new method "OptimizeColorPalette" will modify the Excel 97-2003 color palette in an xls file so it includes the colors used in the sheet. Excel 2007 or newer doesn't need this, as it supports RGB colors. MISC IMPROVEMENTS IN XLSX FILE SUPPORT. Support for autofilters, selections, printer driver settings, showing gridlines/headers and many small window properties when loading or saving xlsx. Macros are preserved when reading an xls file and saving as xlsx. IMPROVED MEDIUM-TRUST SUPPORT. Improved fallback in Exceptions when running in Medium Trust. As before, FlexCel can be compiled with the "FULLYMANAGED" conditional define to not only be 100% safe but also 100% managed code. But now even the dlls compiled without "FULLYMANAGED" will work fine in Medium Trust. NEW METHODS IN XLSFILE FOR EXPORTING AND IMPORTING FROM TEXT FILES. The new methods XlsFile.Import and XlsFile.Export provide more flexibility when working with text files than the existing XlsFile.Open/XlsFile.Save methods. Now you can specify a "fixed length" file besides a text-delimited file, and you can also import a text file into the middle of an existing file. NEW COPY MODE ALLOWS COPYING OBJECTS MARKED AS "DON'T COPY" when copying ranges or sheets. TRangeCopyMode.AllIncludingDontMoveAndSizeObjects will copy everything when used in InsertAndCopyRange. InsertAndCopySheets will now use this mode by default. NEW METHOD GETUSEDNAMEDRANGES IN THE API. Returns which ranges are being used in formulas inside the sheet and which aren't. NEW METHOD CELLRANGEDIMENSIONS IN THE API. Returns the dimensions a range of cells would use when rendered. Can be used when rendering to a bitmap to calculate the size of the bitmap that will hold the cells. TOOLS ARE NOW PRECOMPILED WITH .NET 3.5. Tools like ApiMate and FlexCelDesigner used to come precompiled with .NET 1.1, so you could use them no matter which .NET version you had on your development machine. But as .NET 1.1 doesn't support xlsx, they now come with 3.5. BUG FIXES. Small fixes and improvements.

New on v 5.0.1.0 - October 2009

IMPROVED PERFORMANCE INSERTING ROWS WITH MANY IMAGES. This will speed up reports with lots of images too. NEW FUNCTION SUPPORT FOR RECALCULATION. PercentRank is now implemented too.
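A sketch of the OptimizeColorPalette call described above, run before saving to the legacy format (the zero-argument signature and the Save overload taking a path are assumptions):

    // Rebuild the 97-2003 indexed palette from the colors actually used,
    // then save as xls so the colors survive the indexed format.
    xls.OptimizeColorPalette();    // zero-argument call is assumed
    xls.Save("report-legacy.xls"); // path overload is assumed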
IMPROVED SUPPORT FOR OTHER THIRD PARTY EXCEL GENERATED FILES. While we support virtually every xls file Excel generates (and we are not aware of any file we can't retrieve if it has been saved with Excel and it is in xls 97 or up), some third party apps create files with wrong information, that rely on bugs (and sometimes even on buffer overflows) in Excel to work. In this release we implemented support for many of those files, including files generated by SAP. BUG FIXES. This is primarily a maintenance release, and there are many bug fixes and small improvements, mainly in the Excel 2007 support (both xlsx and "Excel 2007 xls"). It is recommended that you update from 5.0.

New on v 5.0.0.0 - October 2009

EXCEL 2007/2010 SUPPORT. Included support for new features in Excel 2007/2010:
•Basic support for reading and writing the xlsx file format. Note that due to framework limitations, you need .NET 3.5 for xlsx support.
•Expanded the rows to 1048576 and the columns to 16384. A compatibility mode still lets you work with the smaller grid should you need to do so.
•Support for Excel 2007's true color and themes. Breaking change: ColorIndex properties don't exist anymore and are now just Color. You can still access the color indexes with Color.Index.
•Support for gradients in cell backgrounds; to get/set them or to export them to pdf/images/print.
•Support for a different header and footer for the first page and for even pages; to get/set them or to export them to pdf/images/print.
•Support for comments in named ranges.
•Cell indentation can go up to 250 characters instead of the old 15.
•Methods OptionsMultithreadRecalc, OptionsForceFullRecalc, OptionsAutoCompressPictures, OptionsBackup, OptionsCheckCompatibility in the XlsFile class allow you to configure the corresponding settings in an Excel file.
Take a look at the new section "Considerations about Excel 2007 support" in the API Guide for more information about updating to xlsx support. NEW HTML 3.2 SIMPLE EXPORTING MODE. This new exporting mode won't use CSS or floating images, and most settings will be done through simple tags. While some style tags are still used when there is no other option, they are mostly not used either. This mode isn't as faithful at reproducing Excel files as the existing ones, and it doesn't validate either, but it can be very useful when you need simple HTML more than an exact representation of the xls file. It can be used with devices or browsers that don't support CSS, or in places where you can't change the existing CSS definitions (for example if you are adding a table to a blog post, where you can't change the page headers to include another CSS file). WHAT-IF TABLES. Now FlexCel can recalculate What-if tables, and you can add or read the What-if tables in a file. APIMate also supports What-if tables now, and will show the syntax to create them. ADDED RECALCCELL METHOD TO THE API. This method allows you to calculate only a cell and its dependencies, not the whole workbook. It can be useful if you are using FlexCel as a calculator and making thousands of recalculations where you are only interested in the value of one cell. ADDED RECALCEXPRESSION METHOD TO THE API. With this method you can calculate any formula that is not in the file. For example, if you want to know the sum of the cells in column A of a worksheet, you can use xls.RecalcExpression("=sum(A:A)"). SUPPORT FOR ENTERING MULTICELL FORMULAS WITH THE API. Now you can not only enter array formulas with the API as you could before, but also enter array formulas that span more than one cell. We only added the ability to add them from the API; FlexCel was already fully aware of multicell array formulas and could recalculate them too. ApiMate will show you the syntax to enter them. IMPROVED SUPPORT FOR DATE AXIS IN CHARTS. Now date axes in charts behave exactly the same way they do in Excel.
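The RecalcExpression call quoted just above can be used as in this minimal sketch (the call itself is from the notes; the object return type and the conversion are assumptions):

    // Sum column A without storing the formula anywhere in the file.
    object result = xls.RecalcExpression("=sum(A:A)"); // return type assumed
    Console.WriteLine(Convert.ToDouble(result));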
IMPROVED SUPPORT FOR NUMERIC FORMATS. Now "*", "_" and "?" characters in format strings are fully supported when rendering files, and will show exactly as they do in Excel. IMPROVED RENDERING. Support for diagonal borders. A new "Linespace" property in the XlsFile object allows you to fine-tune the linespace between 2 lines in multiline cells. IMPROVED COLOR MATCHING ALGORITHM FOR CONVERTING TO INDEXED COLORS. Now NearestColorIndex will use the Euclidean distance in L*a*b* color space instead of RGB, for improved color matching. NEW SAVEFORHASHING METHOD. This method will save the file in a file format that will remain the same if the file didn't change, ignoring the timestamps present in the xls file format. So you can hash this value and use the hash to compare it to a new file, and know if something changed. Cell selections and sheet selections are not saved by default, but they can be included. ADDED ABILITY TO READ AND WRITE "SHARED WORKBOOK" PROTECTION OPTIONS. Now you can change the shared workbook protection options in any file, xls or xlsx. IMPROVED PERFORMANCE. FlexCel 5 has been a big rewrite that allowed us to tweak many places for even better performance.

New on v 4.9.6.2

IMPROVED AUTOFIT OF MERGED CELLS. Now when autofitting rows and a merged cell has more than one row, you can select which one of the rows from the merged cell will be updated. The same applies for autofitting columns and merged cells with more than one column. This applies to both reports and the API. See the "Autofitting Merged Cells" section in the API Guide for more information. SYNTAX HIGHLIGHT WHEN DEBUGGING REPORTS. Now when in debug mode, strings will be maroon, booleans blue and errors red.

New on v 4.9.6.0 - November 2008

DELPHI PRISM SUPPORT. All demos have been converted to Delphi Prism, the installation now installs into Delphi Prism, and APIMate can generate Delphi Prism code. ABILITY TO MODIFY CHART SERIES FROM THE API. Now you can directly modify chart series from the API. FULL SUPPORT FOR WORKING WITH NAMED STYLES FROM THE API. Now you can create, modify or remove named styles in the Excel file. Also apply styles or find out which styles are applied to a cell. TTC FONT SUPPORT WHEN EXPORTING TO PDF. Now TTC (True Type Collection) fonts are fully supported when exporting to PDF. This includes subsetting. NEW SETEXPRESSION METHOD IN FLEXCEL REPORT. Now you can use the SetExpression method in FlexCelReport to dynamically add formulas to a report. For example, you might have an edit box where the user enters an expression like "<#evaluate(<#Order.Amount> * <#Order.Vat>)>", and this expression will be used in the final report. With this method you can reuse the same template to evaluate different formulas. IMPROVED HTML RENDERING. Fixed small browser incompatibilities. Chart sheets are now exported too. Now FlexCelViewer renders by default in XHTML 1.1, to be compatible with the designer. IMPROVED SPEED RENDERING CONDITIONAL FORMATS. If you have thousands of conditional formats defined in a sheet, the speed of rendering will be much faster. IMPROVED SPEED IN FORMULA RECALCULATION. Now formula recalc is faster if you are using full sheet ranges (like A:IV). SMALL BUG FIXES. Autofilters are now updated when inserting or deleting columns.

New on v 4.9.5.0

FONT SUBSTITUTION IN PDF. A new property "FallbackFonts" in FlexCelPdfExport allows you to specify a list of "fallback" fonts that FlexCel will use when the character to print is not in the main font. See "Dealing with missing fonts and glyphs" in UsingFlexCelPdfExport.pdf for more information.
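A sketch of the FallbackFonts property just described (the property name is from the notes; the constructor arguments and representing the list as a single semicolon-separated string are assumptions):

    FlexCelPdfExport pdf = new FlexCelPdfExport(xls); // constructor form assumed
    // Glyphs missing from the main font are looked up in these fonts,
    // in order, instead of rendering as blank squares.
    pdf.FallbackFonts = "Arial Unicode MS; MS Mincho"; // list format assumed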
NEW DBVALUE TAG FOR REPORTS. Allows you to know the value of any record of a data table, for example to merge similar cells. See the new "Merging Similar Cells" demo. IMPROVED NAMED RANGE SUPPORT. New methods DeleteNamedRange and ConvertExternalNamesToRefErrors will let you delete a range or convert all ranges with external references in a file to #REF! errors. NEW TRYTOCONVERTSTRINGS PROPERTY IN FLEXCELREPORT. When your data is stored as strings in your database, this property will make FlexCel enter the correct datatype for the contents into the cells.

New on v 4.9.2.0 - June 2008

NEW LIST TAG FOR REPORTS. The <#List()> tag allows you to aggregate a dataset into a list that can be dropped into a single cell, and also to use other tags without being inside a named range. NEW SEMIABSOLUTEREFERENCES properties in the API and in Reports. Now you can control how to change absolute references in formulas referring to cells inside the block being copied. For example in Excel, if you have Cell A1: 1, Cell B1: =$A$1, and copy the row down, the new row will be Cell A2: 1, Cell B2: =$A$1. If you set this new property to true, Cell B2 will be =$A$2, since A2 is inside the block being copied. You can use this in the API when copying blocks with absolute references you would like to change, or in multi master-detail reports, to ensure absolute references point to the right place. SHEET TAB COLOR SUPPORT. Now you can read or set the color of a sheet tab using the new SheetTabColorIndex property in XlsFile. (This feature is supported in Excel XP or newer.) SHEET TAB COLOR EXPORTED TO HTML. Now by default if a sheet has a tab color, it will be shown in the resulting HTML file. You can change this by setting the new "UseSheetTabColors" property in the StandardSheetSelector class to false. BUG FIXES. Fixed an error when subsetting complex true type fonts when exporting to pdf. Fixed an error when entering a bmp image in the Compact Framework.

New on v 4.9.1.0

Breaking Change: SUPPORT FOR EMBEDDING FONT SUBSETS IN PDF. Now FlexCel can embed only the subset of characters being used from a font into a PDF file, allowing smaller PDF files when embedding unicode fonts. NOTE: This is a BREAKING change, since font subsetting is now enabled by default. If you want to keep the old behavior (for example to have editable PDF files) you need to set the FontSubset property to DontSubset in the PDF export components. GLOBAL ERROR HANDLER FOR NON-FATAL ERRORS. There is a new FlexCelTrace global class where you can hook a listener to get notified of all non-fatal errors while working with FlexCel. You can use it for example to know when a font is not installed in the system and is being replaced by another in a pdf file, or when a character is not present in a font and so it will show as a blank square. See the new "Error Handling" demo and the documentation in the PDF export guide. ADDED TOP(N) FILTER FOR REPORTS. With this new filter you can get the top n items from a table directly from the template without touching the code. See the modified "Fixed Forms With Datasets" demo. IMPROVED HTML RENDERING. Improved how exported HTML files are generated. IMPROVED PDF EXPORT. Added a new event allowing you to control whether or not to embed an individual font. See the modified "Export PDF" demo. BUG FIXES. Fixed a bug with some functions when recalculating linked files. Fixed an overflow exception when creating charts with very large values.

New on v 4.9.0.0

RECALCULATION OF LINKED FILES.
Now FlexCel can recalculate across linked files, even files with circular links. See the new section about Workspaces in the PDF API Guide. AGGREGATE SUPPORT IN REPORTS. The new tag "Aggregate" allows you to sum, average or find the minimum or maximum value in a dataset from the template. You can use it when you can't modify the data layer. See the new "Aggregates" demo. SUPPORT FOR HTML TAGS WHEN REPLACING TEXT IN AUTOSHAPES. Now you can use html inside autoshapes as you can inside normal cells. IMPROVED RENDERING. Fixed small issues when rendering Excel spreadsheets.

New on v 4.8.0.1

IMPROVED RECALCULATION. Fixed a bug that might cause a file not to be recalculated the second time you call recalc with complex files. Small performance improvements. IMPROVED FORMAT DISPLAY. Added support for [mm], [hh] and [ss] format specifiers for elapsed time. IMPROVED MONO COMPATIBILITY. Changed internal compression routines in pdf so they work under Mono.

New on v 4.8.0.0

PDF SIGNING. Now you can digitally sign the generated pdfs, with either a visible or an invisible signature. NEW APIMATE TOOL. This new tool can convert an Excel file to code, so you can see how to call the FlexCel APIs. Code can be generated in C#, VB.NET or Delphi.NET. A Flash demo showing how to use it is available online. IMPROVED HTML GENERATION. Includes the ability to export headers and footers as blocks above and below the spreadsheet, and fixes to work around Internet Explorer bugs. Exporting headers and footers to HTML is off by default, but you can turn it on. (Look at the Export to HTML demo.) BUG FIXES. Small fix with formulas in Data Validation, and support for [>n] tags in numeric formatting expressions.

New on v 4.7.0.1

FLEXCEL DESIGNER BUG FIX. FlexCel Designer could raise an Exception when started.

New on v 4.7.0.0

INTELLIGENT PAGE BREAKS. Even though there is no direct support for widow/orphan lines in Excel, FlexCel now provides a way to keep rows and columns together, avoiding page breaks in the middle of important data. You tell FlexCel which rows you want to keep together, and it will automatically add page breaks at the needed points in the file so it prints as you want. This new feature can be used both from the API and from the reports. For more information, look at the "Intelligent Page Breaks" demos in the Report and API sections. BETTER ERROR HANDLING OF PAGE BREAK ERRORS. In previous FlexCel versions you could choose whether to raise an Exception or silently ignore errors when trying to insert more than the maximum allowed number of manual page breaks (1026). In this version you can insert as many page breaks as you want, and the error or silent ignore will happen at save time. This allows you to have more than 1026 manual page breaks when exporting to PDF without saving as xls. <#DEFINED FORMAT> TAG FOR REPORTS. Allows you to know whether a user-defined format is defined or not. Look at the Intelligent Page Breaks demo in the report section. NEW <#IMGPOS>, <#IMGFIT> AND <#IMGDELETE> TAGS FOR REPORTS. You can use ImgPos to center or align an image dynamically inside a cell. ImgFit will resize the rows and columns below the image so the image fits in one cell, and ImgDelete will delete an image. Take a look at the modified Images demo or at the new Features Page demo. IMPROVED AUTOFIT IN API AND <#AUTOFIT> TAG FOR REPORTS. Now you can autofit rows and columns setting a maximum and a minimum height/width for the autofit. IMPROVED <#IMGSIZE> TAG FOR REPORTS. Now the ImageSize tag can do a "Best Fit" resize.
You define the maximum size of the image in the template, and ImageSize will resize the image so it is as big as possible while keeping the aspect ratio and staying inside the bounds you select. Look at the modified Images demo. IMPROVED <#FORMAT RANGE>, <#DELETE RANGE> AND <#MERGE RANGE> TAGS FOR REPORTS. Now you can use named ranges instead of strings like "a1:a2" to define the ranges to format, merge or delete. It is recommended that you use this new way, so the ranges will adapt when you insert rows in your template. (A tag <#FORMAT RANGE(a1:a3;blue)> will not change to A2:A4 if you insert a new row at A1. A tag using a named range will.) NEW IMAGEBACKGROUND PROPERTY WHEN EXPORTING TO HTML. Allows you to define a background color like white for images in html files, so they are not transparent and they show fine in Internet Explorer 6 without needing to set FixIE6TransparentPngSupport to true or use gif images instead of png. BUG FIXES. Small bug fix for border cases when inserting columns. The AND function can now AND over a range of cells. Report expressions can now use named ranges.

New on v 4.6.0.0

SUPPORT FOR VISUAL STUDIO 2008 AND .NET 3.5 FRAMEWORK. Also updated Setup with the option for installing into VS2008. NATIVE DEMOS FOR VISUAL BASIC.NET. All of the more than 50 demos have been converted to Visual Basic .NET, allowing easier study for VB users. SUPPORT FOR CUSTOM EXCEL FORMULA FUNCTIONS. Now you can define your own classes that implement Excel custom formula functions (like, for example, the ones in the Analysis ToolPak Add-in, or any function you define using a macro). You can read formulas using those functions from Excel, write them or calculate them. See the "Custom Excel Formula Functions" demo for more information. SUPPORT FOR AUTOFILTERS IN API. Now you can read or write autofilters in a sheet using the API. IMPROVED COPYING FROM ONE FILE TO ANOTHER. Now when copying between files, charts will be copied too, and all external references will be converted to the new file. Also, autofilters will be copied when there are no autofilters in the destination sheet. IMPROVED GENERIC REPORTS. Now you can write expressions in a cell with a <#DataSet.*> tag, leading to much more powerful generic reports. Also, if the cell with the <#DataSet.**> tag has an autofilter, the autofilter will be propagated to the following columns. See the improved Generic Reports demo for more information. MORE CUSTOMIZATION IN THE <#INCLUDE> TAG. Now you can include files in your reports without running a report on the include, and also specify whether you want to copy the column widths and row heights from the included report into the parent. See the documentation for the Include tag. BUG FIXES. Small bug fixes and optimizations.

New on v 4.5.1.0 - October 2007

DEBUG MODE IN FLEXCELREPORT. A new property in FlexCelReport, DebugExpressions, allows you to output the full stack trace of the report tags to the cells where they are, instead of the tag value. Another property, ErrorsInResultFile, allows you to log the error messages to the generated file instead of raising exceptions. See the new Debugging Reports demo and the Debugging Reports section in the pdf user guide for more information. DATA VALIDATION SUPPORT. Added methods to add, delete, change or get information about the data validation of a cell. IMPROVED OUTLINE SUPPORT. Added methods to collapse or expand the outlines in a sheet to a specified level, to collapse and expand individual nodes, or to find out if a row or column is an outline node (contains a "+" sign).
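A sketch of the two report-debugging switches described above (the property names are from the notes; treating both as booleans is an assumption):

    FlexCelReport report = new FlexCelReport();
    report.DebugExpressions = true;   // write the tag stack trace into the cells
    report.ErrorsInResultFile = true; // log errors to the output file instead of throwing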
IMPROVED XLS COMPATIBILITY WITH EXCEL 2007. Some really complex image manipulations could cause Excel 2007 to fail to load the generated xls files. This has now been fixed. IMPROVED COMPATIBILITY WITH MONO. Tested with the .NET 2.0 implementation of Mono, and worked around Mono issues. NEW OBJECT EXPLORER AND ADVANCED API DEMOS. The first shows how objects in a sheet are nested, and the second shows how to use different methods in the API.

New on v 4.5.0.0 - September 2007

HTML EXPORTING ENGINE. A new component TFlexCelHtmlExport can export Excel files to html, in HTML 4.01 Strict or XHTML 1.1 and fully standards compliant, with the quality you have come to expect from us. Most things, like images, charts, merged cells, conditional formats, wordart, etc. are exported. Multiple sheets can be exported in tabs or as a single file. Support for IE 6/7, Firefox, Opera and Safari. FLEXCELASPVIEWER. Allows viewing Excel files as html directly from any ASP.NET application. Just drop the component in a WebForm, and assign it to an xls file. (Only ASP.NET 2.0 is supported.) META TEMPLATES. The new <#PREPROCESS> tag allows a template to modify itself before creating a report. You can now, for example, create reports that will automatically delete a column from the report template if the dataset does not have the field. See the "Meta Templates" demo for more information. PARTIAL FORMAT DEFINITIONS IN REPORTS. Now you can define formats in the config sheet that will apply only a part of the cell format. For example, if you name a format "Header(background)" it will only apply the background of the cell and not all the other properties. See EndUserGuide.pdf for more information, and the Multiple Sheet Report demo for an example. API IMPROVEMENTS. The new method XlsFile.RenderObject allows exporting any image/chart/autoshape into an image. The new method XlsFile.RenderCells allows exporting a range of cells (without objects) into an image. You can still export full spreadsheets to images with FlexCelImgExport, but these methods provide lower-level access. FULL TEXT SEARCH IN DEMOS. Now you can easily find the demo that shows a feature you are interested in by typing in the Search box in MainDemo. BUG FIXES AND SPEED IMPROVEMENTS. Small fixes in rendering and overall speed enhancements.

New on v 4.0.0.3 - May 2007

BUG FIXES. Locales like Turkish, where I (uppercase) is not the same as i (lowercase), had problems. When printing/exporting to pdf more than one sheet, sometimes the "print to fit" size for the second sheet could be calculated wrong. Rotated transparent shapes exported to pdf could be exported wrong.

New on v 4.0.0.2 - April 2007

IMPROVED CSV IMPORT/EXPORT. Now supporting locales. BUG FIXES. Error in border cases when adapting formulas after inserting rows in different sheets.

New on v 4.0.0.0 - December 2006

CHART RENDERING. Now most 2d charts are printed and exported as images or pdf. 3d charts are exported as their 2d equivalents. Bubble, Surface and Radar charts are not exported. NEW HTML CAPABILITIES. Now you can directly enter HTML-formatted strings into an Excel cell, using TRichString.FromHtml(). You can also convert the rich text in an Excel cell into an HTML string. A new property "HtmlMode" on FlexCelReport allows you to do reports from HTML data. Also, the new <#HTML> tag allows you to select which cells you want to use html in and which ones you don't, no matter the "HtmlMode" value. See the "HTML Reports" demo for more information. SUPPORT FOR WRITING PXL 2.0 (POCKET EXCEL) FILES.
Now you can not only read but also create native Pxl 2.0 files. READING DOCUMENT PROPERTIES. Now you can read the Author, Title, etc. of any xls document. IMPROVED FLEXCELIMAGEEXPORT. The new method "SaveAsImage" allows creating multipage tiffs, fax, png, gif and jpg images of any xls file with just a method call. See the "PRINT PREVIEW AND EXPORT" demo for more details. IMPROVED PRINTING AND EXPORTING. New properties AllVisibleSheets and ResetPageNumberOnEachSheet on FlexCelImgExport and FlexCelPrintDocument allow printing all sheets in a workbook, keeping the "page n / m" headers or footers consecutive. A method on FlexCelPdfExport allows the same thing. See the exporting demos. AUTOFITTING SUPPORT. Now you can autofit rows or columns with the XlsFile.AutoFitRow, XlsFile.AutoFitCol and XlsFile.AutoFitWorkbook methods. In reports, the new tags <#Row Height> and <#Column Width> allow you to change the row height / column width in a report, and to hide, show or autofit columns and rows. See the new Autofitting demo in the reports section. VIRTUAL DATASETS. Now you can use any data you like as the source for your reports, not only Datasets. See the new Virtual DataSets demo. SUPPORT FOR MANUAL FORMULAS IN REPORTS. New report tags <#Formula> and <#Ref> allow replacing tags inside formulas, creating customizable formulas depending on the report data. (See the new "Manual Formulas" demo.) NEW ENTERFORMULAS PROPERTY ON FLEXCELREPORT. Allows entering any text starting with "=" as a formula instead of text. FIXED BANDS ON REPORTS. Now, by defining "__band__FIXED" ranges you can have bands that don't insert cells when moving down. See the Fixed Forms With Datasets demo. IMPROVED TAG REPLACE. Report tags are now also replaced in WordArt objects and screen tips inside hyperlinks. SPLIT TAG ON REPORTS. Allows splitting a datatable every n rows. See the Split demo for more information. USER TABLE TAG ON REPORTS. Allows defining the datasets you want to use directly in the template. See the User Tables demo for more information. ADDED RECALCULATION FOR FUNCTIONS. DCount, DSum, DAverage, DMin, DMax, DProduct, DCountA, DGet, DVar, DVarP, DStDev, DStDevP, Large, Small, MinA, MaxA, Var, VarP, VarA, VarPA, WeekDay, Product, SumSq, CountBlank, Roman, AverageA, Days360, FV, PV, NPV, DB, DDB, Syd, Sln, PMT, IPMT, PPMT, NPer, NormDist, NormsDist, LogNormDist, NormInv, NormsInv, LogInv, ExponDist, Poisson, BinomDist, NegBinomDist, HypGeomDist, Standardize, GeoMean, HarMean, Rank, GammaDist, GammaInv, GammaLn, ChiDist, ChiInv, IRR, MIRR, Rate, Areas, Rows, Columns, SumX2mY2, SumX2pY2, SumXmY2, Transpose, MMult, ZTest, ChiTest, Weibull, Kurt, Skew, AveDev, DevSQ, Steyx, Rsq, Pearson, Slope, Fisher, FisherInv, Median, Quartile, Percentile, Mode, Intercept. 210 functions are now supported (see the new SupportedFunctions.xls spreadsheet). IMPROVED ARRAY FORMULA SUPPORT. Now you can enter array formulas with the API (for example "{=Average(if(a1:a3=3;1))}"). And now FlexCel can calculate array formulas too, including array formulas that cover more than one cell. ADDED BOOKMARKS TO PDF. Now you can automatically add a bookmark for each sheet when exporting to pdf, or manually modify the bookmarks too. See the "Custom Preview" demo, on the button to export to pdf when "All Sheets" is selected. IMPROVED RENDERING. Added support to print and export to PDF more than 70 autoshapes: from block arrows to flowcharts to basic shapes. See the new file SupportedAutoshapes.xls for more information. IMPROVED RENDERING.
Now FlexCel can print and export to pdf basic WordArt text. Not all effects or types of WordArt are supported, but the text is shown. IMPROVED RENDERING. Improved shadow support for autoshapes; gradient, texture, pattern and image fills are also supported. ADDED MOVERANGE TO THE API. Allows moving a range of cells in a sheet the same way Excel moves them, adapting all formula references as needed. Like the existing InsertAndCopyRange and DeleteRange methods, this method is fully optimized to perform thousands of moves per second. ADDED Find, Replace AND Sort METHODS TO THE API. While you could always do this by code, now it is easier to search inside, replace or sort a range. ADDED XlsFile.Protection.WriteAccess PROPERTY. Lets you know which user has a file opened in Excel. See the modified Getting Started demo. NEW READING FILES DEMO. Shows how to import an Excel file into a DataSet/DataGrid. IMPROVED MONO PDF SUPPORT. Now FlexCel will try to automatically find the fonts when running on Linux. Also added a section in UsingFlexCelPdfExport.pdf explaining how to create pdf files from Mono. 1904 DATES SUPPORT. Full support for 1904-based dates, allowing interoperability with xls files created on Apple computers. PRECISION AS DISPLAYED SUPPORT. Now recalculation in files where "Precision as Displayed" is true will honor that setting. IMPROVED PERFORMANCE. Performance improvements in reporting, rendering and the API. OPTIMIZED PDF FILE SIZE. Now pdfs are smaller. IMPROVED HELP FILES. Help files are now created with Sandcastle, and can be integrated inside the VS IDE. BUG FIXES. Small bug fixes.

New on v 3.7.0.0 - December 2005

SUPPORT FOR READING PXL (POCKET EXCEL) FILES. Now you can read both Pxl 1.0 and 2.0 files. COMPACT FRAMEWORK 2.0 SUPPORT. A new project FlexCelCF20 is included for CF 2.0. A new project for .NET 2.0 (FLEXCEL20.csproj) is included too. DELPHI 2006 (.NET) SUPPORT. Support and demos. SUPPORT FOR RENDERING THE MOST USED AUTOSHAPES. Now rectangles, textboxes, ellipses, lines, triangles and arrows will be printed/previewed/exported to pdf. Support for semi-transparent fills, rich text inside, shadows, etc. NEW FUNCTIONS SUPPORTED FOR RECALCULATION. Cell, Lookup, Address, Fact, Combin, Permut, SinH, CosH, TanH, ASinH, ACosH, ATanH, Fixed, Dollar, Code, T, N, Hyperlink, StDev, StDevP, StDevA, StDevPA, Correl, Covar. NEW CALCULATED COLUMNS ADDED TO REPORTS. Now you can use <#DataSet.#RowPos> and <#DataSet.#RowCount> inside reports to access the current position and record count of a band. IMPROVEMENTS ON PDF API. Now you can write transparent text with the API, for example to superimpose a watermark onto a FlexCel-generated file. (See the PDF Export demo.) IMPROVEMENTS ON RENDERING. Watermark images on headers/footers now print correctly. IMPROVEMENTS ON FORMULA PARSING. There were some issues when entering complex formulas with the API: the formulas would be entered correctly, but Excel would not calculate them. NEW UTILITY API FUNCTIONS. Added 2 overloads to SetCellFormat, allowing you to set the format on a range of cells and to change only one attribute on a range of cells (for example, change only the line style while keeping the existing fonts). NEW FEATURED DEMO. FlexCel Image Explorer allows you to see and extract the images you have inside an Excel file.

New on v 3.6.0.0 - August 2005

DIRECT SQL ON REPORTS. Now you can write sql against a connection on the server directly from the template. Note that for security reasons, if you don't add any connection to the report, no SQL can be executed.
See the "Direct SQL" demo for details. RELATIONSHIPS ON THE TEMPLATE. Now you can add data relationships directly on the template, allowing you, for example, to "split" a dataset into master/detail and relate the 2 new datasets. See the "Master Detail on one Table" demo for more information. DISTINCT FILTER. Now you can use a "DISTINCT()" filter to filter unique values in a dataset. See the "Master Detail on one Table" demo for more information. MERGE RANGE TAG. With it you can conditionally merge cells on a band. See the "Master Detail on one Table" demo for more information. LOTS OF SMALL IMPROVEMENTS ON THE RENDERING ENGINE. Now conditional formats are printed and exported to pdf, cell patterns are exported to pdf and print better, printer hard margins are considered, subscripts/superscripts show on text, grouped and arbitrary-angle rotated images are shown, etc. PARAMETERS ON REPORT EXPRESSIONS. Now you can use parameters when defining report expressions. For more information, see the Expression Parameters demo. IMPROVED CF SUPPORT. Better support for CF packages (registered version only). NEW FEATURED DEMOS. Showing how to access a web service from FlexCel or how to export an AdvWebGrid. Modified the Print/Preview demo to show how to export multipage tiff files. NEW DEMO ON HOW TO DIRECTLY OPEN THE GENERATED FILES. The "Getting Started" and "Getting Started Reports" demos have been modified to show how to directly open the generated files without asking the user to save the file. (Note that for this to work, the user must have Excel on their machine.) FLEXCEL API. New methods FreezePanes and GetFrozenPanes to freeze/unfreeze or get information about frozen panes. New methods SplitWindow / GetSplitWindow to split a sheet. PERFORMANCE IMPROVEMENTS. Almost 30% faster on some tests. IMPROVED FRAMEWORK 2.0 SUPPORT. Now when the FRAMEWORK20 symbol is defined, the new compression classes in System.IO.Compression will be used. BUG FIX. A call to StringFormat.SetMeasurableCharacterRanges could cause a deadlock when running hundreds of threads at the same time.

New on v 3.5.0.0 - May 2005

NATIVE PDF EXPORT. Now you can export your reports to pdf, all in 100% native and managed code, without needing to have Excel or Adobe Acrobat installed. OPTIMIZED PRINTING ENGINE. Much faster and with lots of new features (like repeating rows/columns, brightness/contrast adjustments on images, printing column and row headers, printing different types of borders, and much more). IMPROVED RECALCULATION. More than 100 functions supported, and now much more like Excel. New methods include: Find, Proper, Concatenate, Exact, Rept, Clean, Search, Substitute, Text, Index, Match, RoundUp, RoundDown, Even, Odd, Subtotal, CountA, Value, Sumproduct. Intersection and union of ranges while recalculating, and arrays as parameters of formulas, are also now supported. BASIC AUTOSHAPES SUPPORT. Now you can retrieve all their values and change their text. FlexCelReport will also replace text inside autoshapes. SUPPORT FOR READING/WRITING HEADER AND FOOTER IMAGES. Now you can access images in headers and footers, and they will be printed and exported to pdf too. Note that images in headers and footers are only supported in Excel XP and newer; older Excel versions will open the file but will not display the graphics. NEW COMPONENT TO PREVIEW WITH THUMBNAILS AND WITHOUT PRINTERS INSTALLED. NEW COMPONENT TO EXPORT TO IMAGES. See the Custom Preview demo for more information on both. CELL SELECTIONS. Now you can read and write cell selections in a file.
A new property on FlexCelReport, "ResetCellSelections", allows you to reset all selections on all sheets to "A1", so you do not need to worry about selection positions when saving the template. SET NAMED RANGES. Now you can add or modify named ranges, including ones such as the print area. OPTIMIZED FOR .NET 2.0. If you define the "FRAMEWORK20" conditional define, a lot of 2.0-only features (like generics) will be used in places where they can improve performance. REGULAR EXPRESSIONS ON REPORTS. You can use the new <#REGEX()> tag to perform regular expression replaces in the reports. See the "Regular Expressions" demo for more information. COMPACT FRAMEWORK ASSEMBLIES. While you can still use the same dll for both the compact and full framework (as before), we now include a special FlexCelCF solution that will create an assembly specifically targeted to CF. CODENAMES. Now you can read the codenames of the sheets. This is useful because codenames never change, while sheet names can be changed by the user. NEW METHODS. ConvertFormulasToValues allows you to quickly remove formulas from a sheet. RecalcAndVerify() allows you to verify that you are using supported functions in your template. (See the Verify Recalc demo.) HYPERLINK SUPPORT ENHANCED. Now empty hyperlinks in reports will not show, and the syntax for tags has changed to "*.tag.*" for Excel 2003 compatibility. (Old <.tags> still work, but it is recommended to use the new syntax for new development.)

New on v 3.1.0.1 - January 2005

FORMULA RECALCULATION. Now the most used formulas are recalculated before saving the files (if the new RecalcMode property is not manual), so you can see formula results in any Excel viewer, such as the Microsoft Excel viewer or FlexCelPrintDocument. FOUR RECALCULATION MODES. Recalculation can be Manual (similar to v3.0), Forced (always recalculate before saving), Smart (recalculate before saving only if the file has been modified) or OnEveryChange (recalculate after changing any cell). Also, a new RecalcForced property allows you to tell Excel not to recalculate on open. PROTECTION AND ENCRYPTION. There is a new property "Protection" on XlsFile that allows you to both read and write protected and encrypted xls files. See the Protect demo for more details. ENHANCED DELPHI.NET SUPPORT. Added a new demo with Delphi.NET, and now all BDP datatypes are supported. NEW FUNCTIONS FOR REPORTING. Now you can use Sum, Average, Round, Abs, Ceiling, Floor, Exp, Int, Ln, Log, Log10, Pi, Power, Rand, Sign, Sqrt, Trunc, Count, Radians, Degrees, Sin, Cos, Tan, ASin, ACos, ATan, ATan2, SumIf, CountIf, Date, DateValue, Day, Month, Year, Time, TimeValue, Hour, Minute, Second, Now, Today, Error.Type, IsBlank, IsErr, IsError, IsLogical, IsNA, IsNonText, IsNumber, IsREF, IsText, Type, Na, Choose, Offset, HLookup and VLookup when creating expressions for a report. All of those functions will be evaluated when recalculating formulas too. For a complete list of supported functions, see EndUserGuide.pdf, "Evaluating Expressions". SMALL PERFORMANCE IMPROVEMENTS. FlexCel 3.1 is even faster than 3.0. Not much, because 3.0 was quite fast, but a little faster.

New on v 3.0.0.5 - August 2004

First public FlexCel for .NET version. Based on the FlexCel for Delphi 2.x version, this initial version includes reports and an extensive API to read or write xls files.
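As a closing sketch for the recalculation modes above (the four mode names are from the notes; the enum type name TRecalcMode is an assumption):

    // Recalculate before saving only when the file was modified.
    xls.RecalcMode = TRecalcMode.Smart; // enum type name assumed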
http://www.tmssoftware.biz/flexcel/doc/net/about/whatsnew.html
From 1314c7c4aae6f39154744047754715a9d927c25f Mon Sep 17 00:00:00 2001 From: Allen Kerensky <allen@allenkerensky.com> Date: Sat, 9 Feb 2013 15:06:42 -0600 Subject: [PATCH] Additional ThreadPool worker and IOCP thread startup logic --- OpenSim/Region/Application/Application.cs | 45 ++++++++++++++++++++++++++----- 1 file changed, 39 insertions(+), 6 deletions(-) diff --git a/OpenSim/Region/Application/Application.cs b/OpenSim/Region/Application/Application.cs index 0f90d37..c3e7ec2 100644 --- a/OpenSim/Region/Application/Application.cs +++ b/OpenSim/Region/Application/Application.cs @@ -102,17 +102,50 @@ namespace OpenSim m_log.InfoFormat( "[OPENSIM MAIN]: Environment variable MONO_THREADS_PER_CPU is {0}", monoThreadsPerCpu ?? "unset"); - // Increase the number of IOCP threads available. Mono defaults to a tragically low number + // Verify the Threadpool allocates or uses enough worker and IO completion threads + // .NET 2.0 workerthreads default to 50 * numcores + // .NET 3.0 workerthreads defaults to 250 * numcores + // .NET 4.0 workerthreads are dynamic based on bitness and OS resources + // Max IO Completion threads are 1000 on all 3 CLRs. + int workerThreadsMin = 500; + int workerThreadsMax = 1000; // may need further adjustment to match other CLR + int iocpThreadsMin = 1000; + int iocpThreadsMax = 2000; // may need further adjustment to match other CLR int workerThreads, iocpThreads; System.Threading.ThreadPool.GetMaxThreads(out workerThreads, out iocpThreads); m_log.InfoFormat("[OPENSIM MAIN]: Runtime gave us {0} worker threads and {1} IOCP threads", workerThreads, iocpThreads); - if (workerThreads < 500 || iocpThreads < 1000) + if (workerThreads < workerThreadsMin) { - workerThreads = 500; - iocpThreads = 1000; - m_log.Info("[OPENSIM MAIN]: Bumping up to 500 worker threads and 1000 IOCP threads"); - System.Threading.ThreadPool.SetMaxThreads(workerThreads, iocpThreads); + workerThreads = workerThreadsMin; + m_log.InfoFormat("[OPENSIM MAIN]: Bumping up to worker threads to {0}",workerThreads); } + if (workerThreads > workerThreadsMax) + { + workerThreads = workerThreadsMax; + m_log.InfoFormat("[OPENSIM MAIN]: Limiting worker threads to {0}",workerThreads); + } + // Increase the number of IOCP threads available. + // Mono defaults to a tragically low number (24 on 6-core / 8GB Fedora 17) + if (iocpThreads < iocpThreadsMin) + { + iocpThreads = iocpThreadsMin; + m_log.InfoFormat("[OPENSIM MAIN]: Bumping up IO completion threads to {0}",iocpThreads); + } + // Make sure we don't overallocate IOCP threads and thrash system resources + if ( iocpThreads > iocpThreadsMax ) + { + iocpThreads = iocpThreadsMax; + m_log.InfoFormat("[OPENSIM MAIN]: Limiting IO completion threads to {0}",iocpThreads); + } + // set the resulting worker and IO completion thread counts back to ThreadPool + if ( System.Threading.ThreadPool.SetMaxThreads(workerThreads, iocpThreads) ) + { + m_log.InfoFormat("[OPENSIM MAIN]: Threadpool set to {0} worker threads and {1} IO completion threads", workerThreads, iocpThreads); + } + else + { + m_log.Info("[OPENSIM MAIN]: Threadpool reconfiguration failed, runtime defaults still in effect."); + } // Check if the system is compatible with OpenSimulator. // Ensures that the minimum system requirements are met -- 1.7.11.7
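Outside of the patch format, the clamping logic above reduces to a few Base Class Library calls; this minimal sketch mirrors the patch's constants and uses only documented System.Threading.ThreadPool methods:

    using System;
    using System.Threading;

    class ThreadPoolTuning
    {
        static void Main()
        {
            const int workerMin = 500, workerMax = 1000;
            const int iocpMin = 1000, iocpMax = 2000;

            int workers, iocp;
            ThreadPool.GetMaxThreads(out workers, out iocp);

            // Clamp both pools into the configured [min, max] window,
            // exactly as the patch does with its if-chains.
            workers = Math.Min(Math.Max(workers, workerMin), workerMax);
            iocp = Math.Min(Math.Max(iocp, iocpMin), iocpMax);

            if (!ThreadPool.SetMaxThreads(workers, iocp))
                Console.WriteLine("Threadpool reconfiguration failed; runtime defaults still in effect.");
        }
    }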
http://opensimulator.org/mantis/view.php?id=6537
CC-MAIN-2019-04
en
refinedweb
SOA and the XML Factor: Designing Service-Oriented Solutions with Extreme XML Compatibility

This article was authored by Ronald Murphy and published in The SOA Magazine in May 2009.

Introduction: Why Compatibility Matters

To make a change "backwards compatible," we try to ensure that the data and operations already being used will continue to work. If this doesn't happen, any existing coded uses will need to be updated - usually meaning rebuilding significant parts of the service logic. The first-order effect of this consequence is consumers who don't appreciate the inconvenience you have caused them - and this effect only gets worse with repetition! There are a number of second-order effects as well: the more instances of services we combine together, the more complicated the compatibility matching game gets. To address compatibility breaks, you may have to distribute multiple versions of your service contract, and maintain compatibility matrices of all the products you are associated with. Then, your regression tests and potential debug environments expand "combinatorially."

Basic Compatibility Rules

In the Internet era, we have had a chance to think about the backwards compatibility of distributed systems. During the development of Web architecture, some simple principles were gradually codified in various standards, most notably the "must-ignore" rule of HTTP and HTML [REF-1]. The blogosphere has further developed this into an informal mathematical theory [REF-2] and some practical guidelines for use in the XML and Web services world [REF-3]. The essential formula for ensuring technical contract compatibility is a set of expectations of both service providers and service consumers.

For Web service developers, a useful strategy is to have your latest service version understand as much as possible of the syntax and semantics of its past versions. In fact, if the service knows the version of a requesting consumer, it can actually cater to that version, like a multilingual customer service representative. To the degree services keep knowledge of all available versions, we reduce the very need for must-ignore behavior on the service end, and that issue - and a lot of extended compatibility issues - are reduced to being a client-side problem.

XML Parsing Strategies

Service developers have a choice of approaches to XML parsing, with at least two broad variations in style.

Manual Handling

A programmer writes code such as a DOM traversal to navigate data element by element. Data types are manually interpreted (Integer.parseInt(), etc.) based on documentation such as XML schema. (If you regard this as a primitive approach to be dismissed, consider the legions of Ajax, Perl, and PHP clients out in the world!) Followers of this style are advised to adhere to the must-ignore rule - don't throw errors if you find an unexpected element!

The UPA (Unique Particle Attribution)

The UPA constraint is a rule to keep XML Schema grammars unambiguous, such that any given point in a parse stream is resolvable by only one XML Schema definition element or "particle", such as an "element" definition, a "choice" definition, an "any" definition, etc. The removal of ambiguity creates more rigorous grammars and avoids the need to build a "look-ahead" parser that considers the forward context of a parse in order to resolve particles that are ambiguous when considering only already received data.
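To see what such an ambiguity looks like in practice, here is a minimal schema fragment (the type and element names are invented purely for illustration):

<xsd:complexType name="OrderType">
  <xsd:sequence>
    <xsd:element name="id" type="xsd:string"/>
    <!-- An optional element followed by a wildcard: when a parser sees
         <myElement> in an instance document, it cannot decide whether it
         matches this particle or the xsd:any below. -->
    <xsd:element name="myElement" type="xsd:string" minOccurs="0"/>
    <xsd:any namespace="##any" processContents="lax"
             minOccurs="0" maxOccurs="unbounded"/>
  </xsd:sequence>
</xsd:complexType>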
A typical ambiguity of exactly this kind appears if you follow an optional element (here, the optional "myElement") with an "any" wildcard. If "myElement" appears in the output, it can match either the optional definition or the following "any". Human users usually have much less trouble with this situation than parsers.

Automatic XML Schema-Driven Parsers

Many parsers will match XML schema types to corresponding code-generated programming language objects. A document is automatically mapped to a tree of programming objects. In some platforms, such as .NET, the parsers themselves honor the must-ignore rule. In other cases, such as Apache Axis 1.x, they do not. Runtime XML validation is often not applied, since it must be specifically enabled in the parser and tends to slow performance. If validation is applied, typically the schema is evaluated strictly, and the must-ignore rule is not usually heeded at all - a potentially big problem for compatibility.

An XML Schema is a type contract which defines the set of XML document values available to users (consumers) of the contract. In some settings, this is meant to be binding in the sense that any values outside this set should be rejected. But in Web services, it is best to regard XML Schema as specifying the values known to work for a given version of the schema consumer (a service consumer receiving a response, or a service provider receiving a request). The values of some other related XML schemas from a different version may still be compatible. In fact, it is possible to design schemas with the specific goal of "extreme XML compatibility."

The strictness of some XML parsers about extra data - not keeping to the must-ignore rule - can be one significant initial problem. A somewhat common workaround [REF-4] for this is to reserve wildcard placeholders (xsd:any); this practice is rather controversial and in particular is difficult to follow while keeping to the so-called UPA constraint of non-ambiguity [REF-5]. One radical approach is to discard the UPA constraint; there are also limited workarounds possible [REF-6].

What is the consequence of violating the UPA constraint? The constraint itself was controversial during the development of XML Schema; however, it was adopted in the interest of enabling the greatest variety of parsers, and keeping parse logic simple and lightweight. In practice, violating the constraint does not seem to confuse any of the parsers that are in wide use. XML Schema validators will complain; for example, .NET issues warnings for UPA constraint violations. However, even validators will often function, and validation is not a prominent implementation part of many real-world Web service environments because of the performance cost of performing validation at runtime. In future implementations, this performance factor could shift, but the jury is still out.

XML Schema Evolution

Here, we examine the compatibility impact of various schema changes on consumer-to-service interactions. We assume that a service makes changes to a schema, evolving it to a latest version. The backwards compatibility assessments that follow are from the point of view of the service contract, and refer to whether older consumers can successfully communicate with the new service.

Recommendations for Smooth XML Schema Evolution

In considering the above guidelines for XML Schema compatibility, it is clear that additions (other than enumeration values) are generally quite safe, while most kinds of type changes, and most deletions, will cause backward incompatibility.
This leads to a set of principles for extreme compatibility of XML Schema.

Service Contract Evolution Models

Armed with an understanding of how XML Schema compatibility works and the guidelines for maximizing compatibility, we turn now to interface evolution as a whole. The comments here actually apply beyond XML Schema-based contracts such as those used by Web services; many contracts, such as APIs, have similar concepts. The two most common service contract evolution models are the following:

Ramp

Each new feature is added following the basic compatibility rules. In general, the data, operations, and messages/events of the new feature are a superset of the old; older features, data, etc. are not retired from the service contract schema definitions. Eventually, certain older features may be rejected in logic using business errors, but syntactically the features are still there. The ramp approach has a number of advantages.

Staircase

Periodically, an all-new version of the service contract is developed. No features of the old contract are directly compatible with the new contract; you must upgrade and convert all your client logic to use the new contract. (Some or many types in the new contract version may have the same structure and meaning as those of the old version, but they are still considered distinct types, and mapping is usually required to convert data of the old service contract's types to data of the new contract's types.) The staircase approach has its own advantages.

In practice, it is actually possible and common to combine the above two models: in many environments, some backward-incompatible changes are at least occasionally needed (e.g. cleanups of deprecated types), but these are localized to only certain feature areas. This leads us to a refinement that fits our "extreme compatibility" slogan...

Paddle Wheel

On a given release point, some features are added backward compatibly, and a minimal set of features is changed backward incompatibly. The service smoothes out differences between versions by giving clients the data they need, giving an overall effect of backward compatibility. Particularly for larger service contracts, the paddle wheel strategy can be very effective.

Versioning Enforcement Techniques for XML Schema

Versioning enforcement is itself an aspect of compatibility, and in the above strategies we usually have a common set of goals. Certain changes to Web services will cause enforcement to be strict; renaming services or using new schema namespaces can be employed as a way of enforcing staircase evolution, if desired.

Namespaces: A Closer Look

In strict XML Schema doctrine, most changes to a type result in a different type - even additions, which are disruptive since there is no automatic wildcard or must-ignore provision for them. Some designers take this notion of type difference to an extreme, by either renaming each changed type within the same namespace, or transferring the latest type to a new namespace. There are some advantages to this approach - we'll look mainly at the specific practice of using namespaces to distinguish type versions. Unfortunately, there are also some practical disadvantages. Because of this, we recommend that changing namespaces of types in a schema should only be used if you are pursuing a step evolution strategy - and hopefully your incompatible steps occur once a year or less.

Conclusion

In this article, we've looked at compatibility from a practical standpoint, with some definitions and rules that apply to XML Schema and Web services.
Our analysis of schema evolution strategies led us to a form of "extreme XML compatibility", the paddle wheel strategy. Although other approaches can work well in your environment, the paddle wheel strategy seems to optimize a wide variety of trade-offs, including the overall backward compatibility of a service contract, the amount of effort required to maintain an evolving Web service, and ease of use for clients.

References

[REF-1] Overview of the historic must-ignore rules.
[REF-2] Three-part series developing an informal mathematical theory of compatibility.
[REF-3] Practical compatibility guidelines for the XML and Web services world.
[REF-4] The reserved-wildcard (xsd:any) workaround.
[REF-5] The UPA constraint of non-ambiguity.
[REF-6] Limited workarounds to the UPA constraint; see section 7.4.

This article was originally published in The SOA Magazine, a publication officially associated with "The Prentice Hall Service-Oriented Computing Series from Thomas Erl". Copyright © SOA Systems Inc.
https://dzone.com/articles/soa-and-xml-factor-designing
CC-MAIN-2019-04
en
refinedweb
UICollectionView DataSourcePrefetching Example and Explanation

You can find the source code on GitHub.

Setting up the Xcode project

Create a new file of type Cocoa Touch Class, subclass of UICollectionViewCell, name it collectionViewCell and assign it to the cell. Connect the UIImageView to the collectionViewCell.swift code and name it 'foodImage'. Also, connect the collectionView to ViewController.swift.

Conforming to protocols and writing some lines

Now it's time to write some code in viewController.swift. The first thing to do is to conform the class to the UICollectionViewDelegate, UICollectionViewDataSource, and UICollectionViewDataSourcePrefetching protocols. You should probably see some errors showing up in your sidebar, due to required methods not being implemented yet. We'll cover that soon, but let's finish a little setup first.

We'll need a data source for this UICollectionView, so let's just create an array called imageArray, which for illustrative purposes will store 30 images. Outside of any method, create the array like so:

var imageArray = [UIImage?](repeating: nil, count: 30)

and a variable to store the base URL of a picture like so:

var baseUrl = URL(string: "")!

Same as above, create another array of type URLSessionDataTask:

var tasks = [URLSessionDataTask?](repeating: nil, count: 30)

The next step is to create two separate functions in order to generate images from dynamically generated URLs. The first function requires a parameter, which in our case will be the index of each cell, and returns a URL.

func urlComponents(index: Int) -> URL {
    var baseUrlComponents = URLComponents(url: baseUrl, resolvingAgainstBaseURL: true)
    baseUrlComponents?.path = "/\(screenSize.width)x\(screenSize.height * 0.3)"
    baseUrlComponents?.query = "text=food \(index)"
    return (baseUrlComponents?.url)!
}

The second one is where the downloading process is executed, and it requires the indexPath of each cell as a parameter. The return type of this function is URLSessionDataTask.

func getTask(forIndex: IndexPath) -> URLSessionDataTask {
    let imgURL = urlComponents(index: forIndex.row)
    return URLSession.shared.dataTask(with: imgURL) { data, response, error in
        guard let data = data, error == nil else { return }
        DispatchQueue.main.async() {
            let image = UIImage(data: data)!
            self.imageArray[forIndex.row] = image
            self.collectionView.reloadItems(at: [forIndex])
        }
    }
}

Make sure you have added these lines in the viewDidLoad() method:

collectionView.dataSource = self
collectionView.delegate = self
collectionView.prefetchDataSource = self

Implementing required methods

Now we must implement 3 required methods:

collectionView(_:numberOfItemsInSection:)
collectionView(_:cellForItemAt:)
collectionView(_:prefetchItemsAt:)

UICollectionView must know how many items are going to be inside the section, so we'll return the count of the elements in our array here:

func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
    return imageArray.count
}

Before coming to the most important methods, there is only one more thing left to do. Let's create a method where we can supervise the downloading process. Once again, the required parameter is the indexPath of the cell.
func requestImage(forIndex: IndexPath) {
    var task: URLSessionDataTask
    if imageArray[forIndex.row] != nil {
        // Image is already loaded
        return
    }
    if tasks[forIndex.row] != nil && tasks[forIndex.row]!.state == URLSessionTask.State.running {
        // Wait for task to finish
        return
    }
    task = getTask(forIndex: forIndex)
    tasks[forIndex.row] = task
    task.resume()
}

Finally, it is time to build and return each cell, in this method:

func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    let foodCell = collectionView.dequeueReusableCell(withReuseIdentifier: "foodCell", for: indexPath) as! collectionViewCell
    if let img = imageArray[indexPath.row] {
        foodCell.foodImage.image = img
    } else {
        requestImage(forIndex: indexPath)
    }
    return foodCell
}

As we can see, the requestImage(forIndex:) method helps us observe whether an image is already downloaded, is in progress, or is not downloaded at all.

Prefetching

In order for your app to have "buttery smooth" scrolling performance, iOS 10 introduced the UICollectionViewDataSourcePrefetching protocol:

public protocol UICollectionViewDataSourcePrefetching : NSObjectProtocol {
    // indexPaths are ordered ascending by geometric distance from the collection view
    @available(iOS 10.0, *)
    public func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath])

    // indexPaths that previously were considered as candidates for pre-fetching, but were not actually used; may be a subset of the previous call to -collectionView:prefetchItemsAtIndexPaths:
    @available(iOS 10.0, *)
    optional public func collectionView(_ collectionView: UICollectionView, cancelPrefetchingForItemsAt indexPaths: [IndexPath])
}

Implementing DataSource Prefetching

The first and required method is called when the collection view is ready to start building out cells even before they are on the screen. An array of IndexPath objects is passed to the function and is used to prepare the data source. In our code we would write something like this:

func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) {
    for indexPath in indexPaths {
        requestImage(forIndex: indexPath)
    }
}

Implementing Prefetch Cancellation

As mentioned above, the second method, collectionView(_:cancelPrefetchingForItemsAt:), is called for index paths that were previously considered candidates for prefetching but are no longer needed, so any in-flight work for them can be cancelled. So let's implement the last function in our code:

func collectionView(_ collectionView: UICollectionView, cancelPrefetchingForItemsAt indexPaths: [IndexPath]) {
    for indexPath in indexPaths {
        if let task = tasks[indexPath.row] {
            if task.state != URLSessionTask.State.canceling {
                task.cancel()
            }
        }
    }
}

There may be times when you want to disable collection view prefetching. This can be done by setting the UICollectionView isPrefetchingEnabled property to false. These new functions can be used in a UITableView as well, just by implementing the UITableViewDataSourcePrefetching protocol.

Conclusion

Using this protocol and its functions is a good choice. There's no more need to worry about the performance of cells, as they are ready to come to the screen before being seen; with smooth scrolling performance, our application feels better.
https://www.sitepoint.com/uicollectionview-datasourceprefetching-example-and-explanation/
CC-MAIN-2019-04
en
refinedweb
hey guys, I'm relatively new to C++ and I've been asked to make a Caesar cipher. I'm supposed to take an encrypted .txt file and output its decryption. I've sort of got an idea of my flow, as shown below, but I'm confused as to how to transform capitals in the encrypted .txt file into lower case so I can process them normally. I know I have to use "tolower"; does that mean I have to include the header for it as well? Here's my code so far, am I on the right track? I'm not seeing any errors right now but I haven't debugged yet.

/*
Assignment 06 - The Caesar Cipher
Author - ******
Sources:
References:
void characterCount(char ch, int list[]);
void calcShift(int& shift, int list[]);
void writeOutput(ifstream &in, ofstream &out, int shift);
*/

#include <cstring>
#include <string>
#include <iomanip>
#include <fstream>
#include <iostream>

using namespace std;

const int CAPS_START = 65;
const int LOWER_START = 97;

int caps[26];
int lowerCase[26];

int main()
{
    int maxIndex;
    char ch;
    int intChar;

    //open the file.
    //streaming..
    fstream testFile;
    ifstream inFile;
    ofstream outFile;

    //Open input file
    inFile.open("C:/Users/Tommy/Documents/Visual Studio 2010/Projects/a06.cpp/encrypter.txt");
    if(!inFile.is_open())
    {//error trigger
        cout << "Error opening input file. Closing program.";
        //system("PAUSE");
        system("PAUSE");
        return(0);
    }

    outFile.open("encrypter.txt");
    if(!outFile.is_open())
    {
        //error trigger
        cout << "Error opening output file. Closing Program.";
        //system("PAUSE");
        system("PAUSE");
        return 0;
    }

    //read char by char to the end of the file.
    //decide if the char is upper or lower.
    intChar = static_cast<int>(ch); //Convert to int.

    //after counting all the letters, find the max.
    maxIndex = 0;
    for(int i = 1; i < 26; i++)
    {
        if(caps[maxIndex] > caps[i])
        {
            maxIndex = caps[i];
        }
    }

    system("PAUSE");
    return 0;
}
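To answer the tolower question directly: yes, std::tolower lives in the <cctype> header, which the code above does not yet include. Here is a minimal sketch of just the case-folding and counting step, separate from the rest of the assignment's flow:

#include <cctype>   // std::tolower, std::isalpha
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream in("encrypter.txt");
    if (!in.is_open())
    {
        std::cout << "Error opening input file.\n";
        return 1;
    }

    int counts[26] = {0};
    char ch;
    while (in.get(ch))
    {
        if (std::isalpha(static_cast<unsigned char>(ch)))
        {
            // tolower makes 'A' and 'a' count as the same letter
            ch = static_cast<char>(std::tolower(static_cast<unsigned char>(ch)));
            ++counts[ch - 'a'];
        }
    }
    // counts[] now holds letter frequencies, ready for the shift calculation
    return 0;
}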
https://www.daniweb.com/programming/software-development/threads/396301/the-infamous-caesar-cipher-i-know-help-please
CC-MAIN-2019-04
en
refinedweb
Some time ago I was writing a corporate system with lots of menu options, so it was not easy to visually keep track of implemented and pending options. As a former Visual C++ developer, I miss MFC's main menu implementation, where unimplemented options appear disabled automatically. I have not reproduced the exact MFC functionality, but extended the idea to color signalling, as shown in the figures above. This gives my customer a clear idea of work progress. Optionally, the tool tips will show the associated method name for each menu option.

Using the MenuDecorator class takes two simple steps: include the class in your project, then call the ColorizeImplementedOptions() or EnableImplementedOptions() static method, passing a reference to your MenuStrip object. The first method must also be passed two colors, for implemented and unimplemented options, plus a boolean value indicating whether to show method names in tooltips, as shown below:

public partial class MainForm : Form
{
    // Main form constructor
    public MainForm()
    {
        InitializeComponent(); // this method is generated by Visual Studio IDE

        // Call static function, blue for implemented options, red for non implemented
        // and also show tooltips
        MenuDecorator.ColorizeImplementedOptions(this.MainMenu, Color.Blue, Color.Red, true);
        // etc...

This will produce a result similar to Figure 1. The second static method will not change option colors, but will enable implemented options and disable the others. Here too, you can specify whether to show the tool tips with the associated method names. Here is an example of use:

public partial class MainForm : Form
{
    // Main form constructor
    public MainForm()
    {
        InitializeComponent(); // Method generated by Visual Studio IDE

        // Call static function, and show tooltips
        MenuDecorator.EnableImplementedOptions(this.MainMenu, true);
        // etc...

The result will be similar to Figure 2.

The core section of this little static class is a complex sequence of reflection calls to establish whether a specific menu option has a Click event handler attached to it. It is called recursively, traversing all menu trees. Here is the code portion where the Click event is evaluated for enabling or disabling:

public class MenuDecorator
{
    private const BindingFlags Flags = BindingFlags.Static | BindingFlags.Instance |
        BindingFlags.NonPublic | BindingFlags.FlattenHierarchy;
    private static FieldInfo ClickInfo = typeof(ToolStripMenuItem).GetField("EventClick", Flags);

    // ...

    private static void EnableMenuItem(ToolStripMenuItem _item)
    {
        PropertyInfo events = _item.GetType().GetProperty("Events", Flags);
        if (_item.HasDropDownItems)
        {
            _item.Enabled = true; // Always enable options with children
            foreach (ToolStripItem dropitem in _item.DropDownItems)
            {
                // recursively search child options
                if (dropitem.GetType() == typeof(ToolStripMenuItem))
                    EnableMenuItem((ToolStripMenuItem)dropitem);
            }
        }
        else
        {
            // 'handlers' will be null if there are no events for this menu option
            EventHandlerList handlers = (EventHandlerList)events.GetValue(_item, null);
            Delegate d = handlers[ClickInfo.GetValue(_item)];
            if (_showTips)
                _item.ToolTipText = d == null ? "[empty]" : d.Method.Name;
            _item.Enabled = !object.Equals(d, null);
        }
    }

The solution file included with this article has been produced with Visual Studio 2008, so you won't be able to load it directly from Visual Studio 2005, but you can create a new solution and attach the project file manually.
http://www.codeproject.com/KB/menus/MenuDecorator.aspx
crawl-002
en
refinedweb
This is the .NET version of my previous MFC article, CRadioListBox: A ListBox with Radio Buttons. A couple of years ago, I discussed in a Visual C++ forum a member's request to implement a custom ListBox control similar to MFC's CCheckListBox, but with radio buttons. Initially it appeared to be trivial, since the single-selection version of the ListBox control complies with the requirements, but I have concluded that this control has some advantages.

To use RadioListBox in your project, you just need a few steps: add the RadioListBox control to your form wherever you would use a standard ListBox. Unlike a standard ListBox, a transparent BackColor property is allowed. That's all! Now you can use the radio button collection as a regular ListBox. You can add items with the Items.Add() method and query for the user selection with the SelectedIndex property.

Some .NET controls accept a transparent color as a BackColor property, but ListBox is not one of them, so transparency requires lots of non-managed tricks. However, transparency is a key feature needed for this control to be useful. It allows the control to acquire a real radio button look and feel, as you can see in the screenshot above. I decided to stay in the managed world by providing fake transparency: the BackColor property is overridden to accept a transparent color, and the control saves its own background color brush. When setting the background color to transparent, the control will mimic the parent form or control, even if the form has a non-standard background color.

The RadioListBox class is derived from Windows Forms' ListBox class with the owner-draw feature. The abridged class definition is the following:

using System.ComponentModel;
using System.Drawing;
using System.Windows.Forms.VisualStyles;

namespace System.Windows.Forms
{
    public class RadioListBox : ListBox
    {
        private StringFormat Align;
        private bool IsTransparent = false; // Handles the transparent state
        private Brush BackBrush;            // Manages its own background brush

        // Allows the BackColor to be transparent
        public override Color BackColor ...

        // Hides these properties in the designer
        [Browsable(false)]
        public override DrawMode DrawMode ...

        [Browsable(false)]
        public override SelectionMode SelectionMode ...

        // Public constructor
        public RadioListBox() ...

        // Main painting method
        protected override void OnDrawItem(DrawItemEventArgs e) ...

        // Prevent background erasing
        protected override void DefWndProc(ref Message m) ...

        // Other event handlers
        protected override void OnHandleCreated(EventArgs e) ...
        protected override void OnFontChanged(EventArgs e) ...
        protected override void OnParentChanged(EventArgs e) ...
        protected override void OnParentBackColorChanged(EventArgs e) ...
    }
}

The core enhancement is the OnDrawItem() method. The method does not highlight the selected item as a standard ListBox control does, but draws a radio button instead. It also manages the focus state, to draw the focus rectangle properly, and the background color, according to the transparency attribute.
Here is the C# source code:

// Main painting method
protected override void OnDrawItem(DrawItemEventArgs e)
{
    int maxItem = this.Items.Count - 1;

    if (e.Index < 0 || e.Index > maxItem)
    {
        // Erase all background if control has no items
        e.Graphics.FillRectangle(BackBrush, this.ClientRectangle);
        return;
    }

    int size = e.Font.Height; // button size depends on font height, not on item height

    // Calculate bounds for background, if last item paint up to bottom of control
    Rectangle backRect = e.Bounds;
    if (e.Index == maxItem)
        backRect.Height = this.ClientRectangle.Top + this.ClientRectangle.Height - e.Bounds.Top;
    e.Graphics.FillRectangle(BackBrush, backRect);

    // Determines text color/brush
    Brush textBrush;
    bool isChecked = (e.State & DrawItemState.Selected) == DrawItemState.Selected;
    RadioButtonState state = isChecked ? RadioButtonState.CheckedNormal : RadioButtonState.UncheckedNormal;

    if ((e.State & DrawItemState.Disabled) == DrawItemState.Disabled)
    {
        textBrush = SystemBrushes.GrayText;
        state = isChecked ? RadioButtonState.CheckedDisabled : RadioButtonState.UncheckedDisabled;
    }
    else if ((e.State & DrawItemState.Grayed) == DrawItemState.Grayed)
    {
        textBrush = SystemBrushes.GrayText;
        state = isChecked ? RadioButtonState.CheckedDisabled : RadioButtonState.UncheckedDisabled;
    }
    else
    {
        textBrush = SystemBrushes.FromSystemColor(this.ForeColor);
    }

    // Determines bounds for text and radio button
    Size glyphSize = RadioButtonRenderer.GetGlyphSize(e.Graphics, state);
    Point glyphLocation = e.Bounds.Location;
    glyphLocation.Y += (e.Bounds.Height - glyphSize.Height) / 2;
    Rectangle bounds = new Rectangle(e.Bounds.X + glyphSize.Width, e.Bounds.Y,
        e.Bounds.Width - glyphSize.Width, e.Bounds.Height);

    // Draws the radio button
    RadioButtonRenderer.DrawRadioButton(e.Graphics, glyphLocation, state);

    // Draws the text
    // Bound DataTable? Then show the column written in DisplayMember
    if (!string.IsNullOrEmpty(DisplayMember))
        e.Graphics.DrawString(((System.Data.DataRowView)this.Items[e.Index])[this.DisplayMember].ToString(),
            e.Font, textBrush, bounds, this.Align);
    else
        e.Graphics.DrawString(this.Items[e.Index].ToString(), e.Font, textBrush, bounds, this.Align);

    // If the ListBox has focus, draw a focus rectangle around the selected item.
    e.DrawFocusRectangle();
}
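The BackColor override itself is elided in the listings above. As a rough sketch (an assumption about one plausible implementation, not the author's actual code), the behavior described earlier, accepting Color.Transparent and keeping a private background brush that mimics the parent, could look like this:

// Hypothetical implementation of the elided BackColor override.
public override Color BackColor
{
    get { return base.BackColor; }
    set
    {
        IsTransparent = (value == Color.Transparent);
        // When transparent, paint with the parent's background color;
        // otherwise use the color that was assigned.
        Color effective = IsTransparent
            ? (Parent != null ? Parent.BackColor : SystemColors.Window)
            : value;
        BackBrush = new SolidBrush(effective);
        if (!IsTransparent)
            base.BackColor = value;
        Invalidate(); // repaint with the new brush
    }
}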
http://www.codeproject.com/kb/combobox/RadioListBoxDotNetVersion.aspx
crawl-002
en
refinedweb
Part 1: Neurons and simple neural networks

Introduction

In this handout we cover the first steps in using PyNEST to simulate neuronal networks. When you have worked through this material, you will know how to:

- start PyNEST
- create neurons and stimulating/recording devices
- query and set their parameters
- connect them to each other or to devices
- simulate the network
- extract the data from recording devices

For more information on the usage of PyNEST, please see the other sections of this primer:

- Part 2: Populations of neurons
- Part 3: Connecting networks with synapses
- Part 4: Topologically structured networks

More advanced examples can be found at Example Networks, or have a look at the source directory of your NEST installation in the subdirectory pynest/examples/.

PyNEST - an interface to the NEST simulator

Figure 1: The Python interpreter imports NEST as a module and dynamically loads the NEST simulator kernel (pynestkernel.so). The core functionality is defined in hl_api.py. A simulation script of the user (mysimulation.py) uses functions defined in this high-level API. These functions generate code in SLI (Simulation Language Interpreter), the native language of the interpreter of NEST. This interpreter, in turn, controls the NEST simulation kernel.

The NEural Simulation Tool (NEST) is designed for the simulation of large heterogeneous networks of point neurons. It is open source software released under the GPL licence. The simulator comes with an interface to Python [4]. Fig. 1 illustrates the interaction between the user's simulation script (mysimulation.py) and the NEST simulator. [2] contains a technically detailed description of the implementation of this interface, and parts of this text are based on that reference. The simulation kernel is written in C++ to obtain the highest possible performance for the simulation.

You can use PyNEST interactively from the Python prompt or from within ipython. This is very helpful when you are exploring PyNEST, trying to learn a new functionality or debugging a routine. Once out of the exploratory mode, you will find it saves a lot of time to write your simulations in text files. These can in turn be run from the command line or from the Python or ipython prompt.

Whether working interactively, semi-interactively, or purely executing scripts, the first thing that needs to happen is importing NEST's functionality into the Python interpreter.

import nest

As with every other module for Python, the available functions can be prompted for.

dir(nest)

One such command is nest.Models(), which will return a list of all the available models you can use. If you want to obtain more information about a particular command, you may use Python's standard help system.

nest.Models?

This will return the help text (docstring) explaining the use of this particular function. There is a help system within NEST as well. You can open the help pages in a browser using nest.helpdesk() and you can get the help page for a particular object using nest.help(object).

Creating Nodes

A neural network in NEST consists of two basic element types: nodes and connections. Nodes are either neurons, devices or sub-networks. Devices are used to stimulate neurons or to record from them. Nodes can be arranged in sub-networks to build hierarchical networks such as layers, columns, and areas - we will get to this later in the course. For now we will work in the default sub-network which is present when we start NEST, known as the root node.
To begin with, the root sub-network is empty. New nodes are created with the command Create, which takes as arguments the model name of the desired node type, and optionally the number of nodes to be created and the initialising parameters. The function returns a list of handles to the new nodes, which you can assign to a variable for later use. These handles are integer numbers, called ids. Many PyNEST functions expect or return a list of ids (see Sec. 8). Thus it is easy to apply functions to large sets of nodes with a single function call.

After having imported NEST and also the Pylab interface to Matplotlib [3], which we will use to display the results, we can start creating nodes. As a first example, we will create a neuron of type iaf_psc_alpha. This neuron is an integrate-and-fire neuron with alpha-shaped postsynaptic currents. The function returns a list of the ids of all the created neurons, in this case only one, which we store in a variable called neuron.

import pylab
import nest
neuron = nest.Create("iaf_psc_alpha")

We can now use the id to access the properties of this neuron. Properties of nodes in NEST are generally accessed via Python dictionaries of key-value pairs of the form {key: value}. In order to see which properties a neuron has, you may ask it for its status.

nest.GetStatus(neuron)

This will print out the corresponding dictionary in the Python console. Many of these properties are not relevant for the dynamics of the neuron. To find out what the interesting properties are, look at the documentation of the model through the helpdesk. If you already know which properties you are interested in, you can specify a key, or a list of keys, as an optional argument to GetStatus:

nest.GetStatus(neuron, "I_e")
nest.GetStatus(neuron, ["V_reset", "V_th"])

In the first case we query the value of the constant background current I_e; the result is given as a tuple with one element. In the second case, we query the values of the reset potential and threshold of the neuron, and receive the result as a nested tuple. If GetStatus is called for a list of nodes, the dimension of the outer tuple is the length of the node list, and the dimension of the inner tuples is the number of keys specified.

To modify the properties in the dictionary, we use SetStatus. In the following example, the background current is set to 376.0 pA, a value causing the neuron to spike periodically.

nest.SetStatus(neuron, {"I_e": 376.0})

Note that we can set several properties at the same time by giving multiple comma-separated key:value pairs in the dictionary. Also be aware that NEST is type sensitive - if a particular property is of type double, then you do need to explicitly write the decimal point:

nest.SetStatus(neuron, {"I_e": 376})

will result in an error. This conveniently protects us from making integer division errors, which are hard to catch.

Next we create a multimeter, a device we can use to record the membrane voltage of a neuron over time. We set its property withtime such that it will also record the points in time at which it samples the membrane voltage. The property record_from expects a list of the names of the variables we would like to record. The variables exposed to the multimeter vary from model to model. For a specific model, you can check the names of the exposed variables by looking at the neuron's property recordables.
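For example, for the iaf_psc_alpha neuron created above, the query might look like this (the exact contents and format of the returned tuple depend on the model and the NEST version):

nest.GetStatus(neuron, "recordables")
# might return something like (['V_m'],) -- at least the membrane potential

With that checked, we create and configure the multimeter as described: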
multimeter = nest.Create("multimeter")
nest.SetStatus(multimeter, {"withtime":True, "record_from":["V_m"]})

We now create a spikedetector, another device that records the spiking events produced by a neuron. We use the optional keyword argument params to set its properties. This is an alternative to using SetStatus. The property withgid indicates whether the spike detector is to record the source id from which it received the event (i.e. the id of our neuron).

spikedetector = nest.Create("spike_detector", params={"withgid": True, "withtime": True})

A short note on naming: here we have called the neuron neuron, the multimeter multimeter and so on. Of course, you can assign your created nodes to any variable names you like, but the script is easier to read if you choose names that reflect the concepts in your simulation.

Connecting nodes with default connections

Now that we know how to create individual nodes, we can start connecting them to form a small network.

nest.Connect(multimeter, neuron)
nest.Connect(neuron, spikedetector)

Figure 2: A Membrane potential of integrate-and-fire neuron with constant input current. B Spikes of the neuron.

The order in which the arguments to Connect are specified reflects the flow of events: if the neuron spikes, it sends an event to the spike detector. Conversely, the multimeter periodically sends requests to the neuron to ask for its membrane potential at that point in time. This can be regarded as a perfect electrode stuck into the neuron.

Now that we have connected the network, we can start the simulation. We have to inform the simulation kernel how long the simulation is to run. Here we choose 1000 ms.

nest.Simulate(1000.0)

Congratulations, you have just simulated your first network in NEST!

Extracting and plotting data from devices

After the simulation has finished, we can obtain the data recorded by the multimeter.

dmm = nest.GetStatus(multimeter)[0]
Vms = dmm["events"]["V_m"]
ts = dmm["events"]["times"]

In the first line, we obtain the list of status dictionaries for all queried nodes. Here, the variable multimeter is the id of only one node, so the returned list just contains one dictionary. We extract the first element of this list by indexing it (hence the [0] at the end). This type of operation occurs quite frequently when using PyNEST, as most functions are designed to take in and return lists, rather than individual values. This is to make operations on groups of items (the usual case when setting up neuronal network simulations) more convenient.

This dictionary contains an entry named events which holds the recorded data. It is itself a dictionary with the entries V_m and times, which we store separately in Vms and ts, in the second and third line, respectively. If you are having trouble imagining dictionaries of dictionaries and what you are extracting from where, try first just printing dmm to the screen to give you a better understanding of its structure, and then in the next step extract the dictionary events, and so on.

Now we are ready to display the data in a figure. To this end, we make use of pylab.

import pylab
pylab.figure(1)
pylab.clf()
pylab.plot(ts, Vms)

The second line opens a figure (with the number 1), the third line clears the window and the fourth line actually produces the plot. You can't see it yet because we have not used pylab.show(). Before we do that, we proceed analogously to obtain and display the spikes from the spike detector.
dSD = nest.GetStatus(spikedetector, keys="events")[0]
evs = dSD["senders"]
ts = dSD["times"]
pylab.figure(2)
pylab.plot(ts, evs, ".")
pylab.show()

Here we extract the events more concisely by using the optional keyword argument keys to GetStatus. This extracts the dictionary element with the key events rather than the whole status dictionary. The output should look like Fig. 2.

If you want to execute this as a script, just paste all lines into a text file named, say, one-neuron.py. You can then run it from the command line by prefixing the file name with python, or from the Python or ipython prompt, by prefixing it with run.

It is possible to collect information of multiple neurons on a single multimeter. This does complicate retrieving the information: the data for each of the n neurons will be stored and returned in an interleaved fashion. Luckily Python provides us with a handy array operation to split the data easily: array slicing with a step (sometimes called stride). To explain this, you have to adapt the model created in the previous part. Save your code under a new name; in the next section you will also work on this code. Create an extra neuron with the background current given a different value:

neuron2 = nest.Create("iaf_psc_alpha")
nest.SetStatus(neuron2, {"I_e": 370.0})

Now connect this newly created neuron to the multimeter:

nest.Connect(multimeter, neuron2)

Run the simulation and plot the results; they will look incorrect. To fix this you must plot the two neuron traces separately. Replace the code that extracts the events from the multimeter with the following lines.

pylab.figure(2)
Vms1 = dmm["events"]["V_m"][::2] # start at index 0: till the end: each second entry
ts1 = dmm["events"]["times"][::2]
pylab.plot(ts1, Vms1)
Vms2 = dmm["events"]["V_m"][1::2] # start at index 1: till the end: each second entry
ts2 = dmm["events"]["times"][1::2]
pylab.plot(ts2, Vms2)

Additional information on array slicing can be found in the NumPy reference documentation on array indexing.

Connecting nodes with specific connections

A commonly used model of neural activity is the Poisson process. We now adapt the previous example so that the neuron receives 2 Poisson spike trains, one excitatory and the other inhibitory. Hence, we need a new device, the poisson_generator. After creating the neurons, we create these two generators and set their rates to 80000 Hz and 15000 Hz, respectively.
In the next part of the introduction (Part 2: Populations of neurons) we will look at more methods for connecting many neurons at once. Two connected neurons Figure 4: Postsynaptic potentials in neuron2 evoked by the spikes of neuron1 There is no additional magic involved in connecting neurons. To demonstrate this, we start from our original example of one neuron with a constant input current, and add a second neuron. import pylab import nest neuron1 = nest.Create("iaf_psc_alpha") nest.SetStatus(neuron1, {"I_e": 376.0}) neuron2 = nest.Create("iaf_psc_alpha") multimeter = nest.Create("multimeter") nest.SetStatus(multimeter, {"withtime":True, "record_from":["V_m"]} We now connect neuron1 to neuron2, and record the membrane potential from neuron2 so we can observe the postsynaptic potentials caused by the spikes of neuron1. nest.Connect(neuron1, neuron2, syn_spec = {"weight":20.0}) nest.Connect(multimeter, neuron2) Here the default delay of 1ms was used. If the delay is specified in addition to the weight, the following shortcut is available: nest.Connect(neuron1, neuron2, syn_spec={"weight":20, "delay":1.0}) If you simulate the network and plot the membrane potential as before, you should then see the postsynaptic potentials of neuron2 evoked by the spikes of neuron1 as in Fig. 4. Command overview These are the functions we introduced for the examples in this handout; the following sections of this introduction will add more. Getting information about NEST. helpdesk(browser="firefox"): Opens the NEST documentation pages in the given browser. help(obj=None,pager="less"): Opens the help page for the given object. Nodes Create(model, n=1, params=None): Create ninstances of type modelin the current sub-network. Parameters for the new nodes can be given as params(a single dictionary, or a list of dictionaries with size n). If omitted, the model’s defaults are used. GetStatus(nodes, keys=None): Return a list of parameter dictionaries for the given list of nodes. If keysis given, a list of values is returned instead. keysmay also be a list, in which case the returned list contains lists of values. SetStatus(nodes, params, val=None): Set the parameters of the given nodesto params, which may be a single dictionary, or a list of dictionaries of the same size as nodes. If valis given, paramshas to be the name of a property, which is set to valon the nodes. valcan be a single value, or a list of the same size as nodes. Connections This is an abbreviated version of the documentation for the Connect function, please see NEST’s online help for the full version and Connection Management for an introduction and worked examples. Connect(pre, post, conn_spec=None, syn_spec=None, model=None): Connect pre neurons to post neurons.Neurons in pre and post are connected using the specified connectivity ( "one_to_one"by default) and synapse type ( "static_synapse"by default). Details depend on the connectivity rule. Note: Connect does not iterate over subnets, it only connects explicitly specified nodes. pre- presynaptic neurons, given as list of GIDs post- presynaptic neurons, given as list of GIDs conn_spec- name or dictionary specifying connectivity rule, see below syn_spec- name or dictionary specifying synapses, see below Connectivity Connectivity is either specified as a string containing the name of a connectivity rule (default: "one_to_one") or as a dictionary specifying the rule and rule-specific parameters (e.g. "indegree"), which must be given. 
In addition switches allowing self-connections ( "autapses", default: True) and multiple connections between a pair of neurons ( "multapses", default: True) can be contained in the dictionary. Synapse The synapse model and its properties can be inserted either as a string describing one synapse model (synapse models are listed in the synapsedict) or as a dictionary as described below. If no synapse model is specified the default model "static_synapse" will be used. Available keys in the synapse dictionary are "model", "weight", "delay", "receptor_type" and parameters specific to the chosen synapse model. All parameters are optional and if not specified will use the default values determined by the current synapse model. "model" determines the synapse type, taken from pre-defined synapse types in NEST or manually specified synapses created via CopyModel(). All other parameters can be scalars or distributions. In the case of scalar parameters, all keys take doubles except for "receptor_type" which has to be initialised with an integer. Distributed parameters are initialised with yet another dictionary specifying the distribution ( "distribution", such as "normal") and distribution-specific paramters (such as "mu" and "sigma"). Simulation control Simulate(t): Simulate the network for tmilliseconds. References [1] Marc-Oliver Gewaltig and Markus Diesmann. (NEural Simulation Tool). , 2(4):1430, 2007. [2] J. M. Eppler, M. Helias, E. Muller, M. Diesmann, and M. Gewaltig. : a convenient interface to the NEST simulator. , 2:12, 2009. [3] John D. Hunter. Matplotlib: A 2d graphics environment., 9(3):90–95, 2007. [4] Python Software Foundation. The Python programming language, 2008..
http://nest-simulator.org/part-1-neurons-and-simple-neural-networks/
CC-MAIN-2017-30
en
refinedweb
I made myself a project which is to make a virtual inventory, but I have been having difficulty with pickling the variable inventory. Could someone please insert the strip of code needed to save the variable to a separate file and make it possible to import it back.

import pickle
import time

def intro():
    print("\t\t\tWelcome to inventory 1.0!")
    print("\t\t Explore the wonders of the inventory.")

intro()

class inv(object):
    amount=0
    def update():
        print("\nYou have:")
    def add(inventory):
        if invspace > len(inventory):
            print("What would you like to add?")
            inventory.append(input("Item: "))
            print("You have:")
            for item in inventory:
                print(item)
                time.sleep(0.2)
            print("\n\n")
        else:
            print("You dont have enough space!")
    def remove(inventory):
        dec=None
        rit=None
        end=int(len(inventory))
        print (end)
        if len(inventory) > 0:
            print("What item would you like to remove?")
            while dec not in range(0,end):
                try:
                    dec=int(input("Item. "))-1
                    if dec not in range(0,end):
                        if dec > 6:
                            print("You only have 8 inventory slots.")
                        else:
                            print("You dont have an item",dec+1,".")
                except ValueError:
                    print("You must type a number.")
            rit=inventory[dec]
            inventory.remove(inventory[dec])
            print("The item",rit,"has been removed.\n\n")
        else:
            print("You do not have any items in your inventory!\n\n")
    def empty(inventory):
        if len(inventory) > 0:
            print("Are you sure you would like to empty your inventory?")
            dec=None
            while not dec in ("yes","no"):
                dec=input("<yes/no>: ")
            if dec==("yes"):
                inventory=[]
                print("Your inventory has been emptied.\n\n")
                return inventory
        else:
            print("You do not have any items in your inventory!\n\n")
    def visrep(inventory):
        print(" --- --- --- ")
        print("| | | |")
        print(" --- --- --- ")
        print(" --- --- --- ")
        print("| | | |")
        print(" --- --- --- ")
        print(" --- --- --- ")
        print("| | | |")
        print(" --- --- --- \n\n")
    def save(inventory):
        inventory=inventory
        print("Pickle here")

invspace=5
print("import pickle here")
inventory=[]
con=1

def decl():
    print("1 - Add item |")
    print("2 - Remove item |")
    print("3 - Empty inventory |")
    print("4 - Visual representation |")
    print("5 - Save")
    print("6 - Quit |")
    print("--------------------------\n\n")

while con==1:
    print("What would you like to do?")
    decl()
    dec=None
    while not dec in range(1,7):
        try:
            dec=int(input("<1-6>: "))
            if not dec in range(1,7):
                print("Enter a number between 1 and 6")
        except ValueError:
            print("Enter a number between 1 and 6")
    if dec==1:
        inv.add(inventory)
    if dec==2:
        inv.remove(inventory)
    if dec==3:
        inv.empty(inventory)
    if dec==4:
        inv.visrep(inventory)
    if dec==5:
        inv.save(inventory)
    if dec==6:
        import sys
        raise SystemExit
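One way to do what the poster asks, as a minimal sketch independent of the rest of the program (the file name inventory.dat is arbitrary), is to dump and load the list with pickle:

import pickle

def save_inventory(inventory):
    # Write the inventory list to a separate file in binary mode
    with open("inventory.dat", "wb") as f:
        pickle.dump(inventory, f)

def load_inventory():
    # Read the inventory back; start with an empty list if no save exists yet
    try:
        with open("inventory.dat", "rb") as f:
            return pickle.load(f)
    except (IOError, EOFError):
        return []

# Example usage in the menu loop above:
#   if dec==5: save_inventory(inventory)
#   and at program startup: inventory = load_inventory()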
https://www.daniweb.com/programming/software-development/threads/358801/ultimate-pickle-need-help
CC-MAIN-2017-30
en
refinedweb
Best Practice Ruby on Rails Refactoring: Databases

- AntiPattern: Messy Migrations
- AntiPattern: Wet Validations

With the Rails framework providing a simple ORM that abstracts many of the database details away from the developer, the database is an afterthought for many Rails developers. While the power of the framework has made this okay to a certain extent, there are important database and Rails-specific considerations that you shouldn't overlook.

AntiPattern: Messy Migrations

Ruby on Rails database migrations were an innovative solution to a real problem faced by developers: how to script changes to the database so that they could be reliably replicated by the rest of the team on their development machines as well as deployed to the production servers at the appropriate time. Before Rails and its baked-in solution, developers often wrote ad hoc database change scripts by hand, if they used them at all. However, as with most other improvements, database migrations are not without pain points. Over time, a database migration can become a tangle of code that can be intimidating to work with rather than the joy it should be. By strictly keeping in mind the following solutions, you can overcome these obstacles and ensure that your migrations never become irreconcilably messy.

Solution: Never Modify the up Method on a Committed Migration

Database migrations enable you to reliably distribute database changes to other members of your team and to ensure that the proper changes are made on your server during deployment. If you commit a new migration to your source code repository, unless there are irreversible bugs in the migration itself, you should follow the practice of never modifying that migration. A migration that has already been run on another team member's computer or the server will never automatically be run again. In order to run it again, a developer must go through an orchestrated dance of backing the migration down and then up again. It gets even worse if other migrations have since been committed, as that could potentially cause data loss. Yes, if you're certain that a migration hasn't been run on the server, then it's possible to communicate to the rest of the team that you've changed a migration and have them re-migrate their database or make the required changes manually. However, that's not an effective use of their time, it creates headaches, and it's error prone. It's simply best to avoid the situation altogether and never modify the up method of a migration.

Of course, there will be times when you've accidentally committed a migration that has an irreversible bug in it that must be fixed. In such circumstances, you'll have no choice but to modify the migration to fix the bug. Ideally, the times when this happens are few and far between. In order to reduce the chances of this happening, you should always be sure to run the migration and inspect the results to ensure accuracy before committing the migration to your source code repository. However, you shouldn't limit yourself to simply running the migration. Instead, you should run the migration and then run the down of the migration and rerun the up. Rails provides rake tasks for doing this:

rake db:migrate
rake db:migrate:redo

The rake db:migrate:redo command runs the down method on the last migration and then reruns the up method on that migration.
Once you've run this and double-checked the results, you can commit your new migration to the repository with confidence.

Solution: Never Use External Code in a Migration

Database migrations are used to manage database change. When the structure of a database changes, very often the data in the database needs to change as well. When this happens, it's fairly common to want to use models inside the migration itself, as in the following example:

class AddJobsCountToUser < ActiveRecord::Migration
  def self.up
    add_column :users, :jobs_count, :integer, :default => 0
    User.all.each do |user|
      user.jobs_count = user.jobs.size
      user.save
    end
  end

  def self.down
    remove_column :users, :jobs_count
  end
end

In the migration above, you're adding a counter cache column to the users table, and this column will store the number of jobs each user has posted. In this migration, you're actually using the User model to find all users and update the column of each one. There are two problems with this approach. First, this approach performs horribly. The code above loads all the users into memory and then, for each user, one at a time, it finds out how many jobs each has and updates its count column. Second, and more importantly, this migration does not run if the model is ever removed from the application, becomes unavailable, or changes in some way that makes the code in this migration no longer valid. The code in migrations is supposed to be able to be run to manage change in the database, in sequence, at any time. When external code is used in a migration, it ties the migration code to code that is not bound by these same rules and can result in an unrunnable migration. Therefore, it's always best to use straight SQL whenever possible in your migrations. If you do so, you can rewrite the preceding migration as follows:

class AddJobsCountToUser < ActiveRecord::Migration
  def self.up
    add_column :users, :jobs_count, :integer, :default => 0
    update(<<-SQL)
      UPDATE users SET jobs_count = (
        SELECT count(*) FROM jobs
        WHERE jobs.user_id = users.id
      )
    SQL
  end

  def self.down
    remove_column :users, :jobs_count
  end
end

When this migration is rewritten using SQL directly, it has no external dependencies beyond the exact state of the database at the time the migration should be executed. There may be cases in which you actually do need to use a model or other Ruby code in a migration. In such cases, the goal is to rely on no external code in your migration. Therefore, all code that's needed, including the model, should be defined inside the migration itself. For example, if you really want to use the User model in the preceding migration, you rewrite it like the following:

class AddJobsCountToUser < ActiveRecord::Migration
  class Job < ActiveRecord::Base
  end

  class User < ActiveRecord::Base
    has_many :jobs
  end

  def self.up
    add_column :users, :jobs_count, :integer, :default => 0
    User.reset_column_information
    User.all.each do |user|
      user.jobs_count = user.jobs.size
      user.save
    end
  end

  def self.down
    remove_column :users, :jobs_count
  end
end

Since this migration defines both the Job and User models, it no longer depends on an external definition of those models being in place. It also defines the has_many relationship between them and therefore defines everything it needs to run successfully. In addition, note the call to User.reset_column_information in the self.up method. When models are defined, Active Record reads the current database schema.
If your migration changes that schema, calling the reset_column_information method causes Active Record to re-inspect the columns in the database. You can use this same technique if you must calculate the value of a column by using an algorithm defined in your application. You cannot rely on the definition of that algorithm to be the same or even be present when the migration is run. Therefore, the algorithm should be duplicated inside the migration itself.

Solution: Always Provide a down Method in Migrations

It's very important that a migration have a reliable self.down defined that actually reverses the migration. You never know when something is going to be rolled back. It's truly bad practice to not have this defined or to have it defined incorrectly. Some migrations simply cannot be fully reversed. This is most often the case for migrations that change data in a destructive manner. If this is the case for a migration for which you're writing the down method, you should do the best reversal you can. If you are in a situation where there is a migration that under no circumstances can ever be reversed safely, you should raise an ActiveRecord::IrreversibleMigration exception, as shown here:

def self.down
  raise ActiveRecord::IrreversibleMigration
end

Raising this exception causes migrations to be stopped when this down method is run. This ensures that the developer running the migrations understands that there is something irreversible that has been done and that cannot be undone without manual intervention. Once you have the down method defined, you should run the migration in both directions to ensure proper functionality. As discussed earlier in this chapter, in the section "Solution: Never Modify the up Method on a Committed Migration," Rails provides rake tasks for doing this:

rake db:migrate
rake db:migrate:redo
http://www.informit.com/articles/article.aspx?p=1652025
CC-MAIN-2017-30
en
refinedweb