
Ancient humans played team sports to test hunting skills: Study


Washington, Jul 1 (PTI) Competitive team games may have originated among hunter-gatherers to help men hone their physical skills and stamina, and to see how each performs under pressure, a study has found.

Competitive team games in which men test their mettle against others are universal across the world, and may have deep roots in our evolutionary past. The games helped men hone their skills, assess the commitment of their team members, and see how each performs under pressure. All these activities suggest motivation to practise skills involved in lethal raiding, said Michelle Scalise Sugiyama of the University of Oregon in the US.

Play behaviour in humans and other animals is thought to have evolved as a way to develop, rehearse, and refine skills that are critical for survival or reproduction. Chase games, for instance, build stamina and speed, which is helpful for evading predators. Similarly, play fighting is believed to develop skills used in actual fighting. Although many animals play fight, only people do so in teams.

The findings, published in the journal Human Nature, suggest that team play fighting is not a recent invention of agricultural societies.

For the purposes of this study, researchers analysed how widespread indigenous forms of coalitional play fighting were among hunter-gatherer societies, and whether these games rehearse motor skills used in lethal raiding. This type of play involves the use of coordinated action and non-lethal physical force by two opposing teams, each of which attempts to attain a predetermined physical objective, such as scoring a goal, while preventing their opponents from doing the same.

They analysed the early ethnographic records of societies described as hunter-gatherers in Murdock's Ethnographic Atlas. Although play (or its absence) was not commonly or extensively documented by early ethnographers, researchers found information about hunter-gatherer team contact games for 46 of the 100 culture regions in the atlas that contain hunter-gatherer societies.

Activities using sticks to hit objects (and sometimes people) were the most common game type, followed by games involving kicking and games similar to rugby. The researchers also found many instances of activities involving running, grappling, parrying and throwing. These physical skills mirror those used by hunter-gatherers when raiding other groups. Coalitional play fighting may have served as a practice ground for learning how to coordinate striking, blocking, kicking, dodging and projectile-throwing maneuvers amongst coalition members, all in an effort to increase the chances of success and reduce the chances of injury during potentially lethal raids.

"Interestingly, mock warfare was found in 39 per cent of culture clusters and boys' mock warfare in 26 per cent. This suggests that motivation to engage in coalitional play fighting emerges in childhood," said Sugiyama.

The safe confines of a game not only had physical benefits but provided an opportunity to work as a team. Men learned to anticipate, monitor and strategically respond to the actions of their opponents, and continuously assess situations as both sides tired or lost combatants.

"Periodic participation in such games during childhood, adolescence, and early to middle adulthood provides individuals with opportunities to viscerally assess the aggressive formidability and commitment of their own and – when played with neighbouring groups – other coalitions as their composition and skills change through time," said Sugiyama. "The widespread evidence for such games among hunter-gatherer societies suggests that the motivation to engage in them is a universal feature of human psychology, generating behaviour that develops, rehearses, and refines the coalitional combat skills used in lethal raiding," she said. PTI

Industry Minister Welcomes Lumber Treatment Plant


Minister of Industry, Investment and Commerce, Hon. Anthony Hylton, says the newly opened ARC Lumber Treatment Plant is a welcome investment that will help to further grow the country's economy.

"The launch of this facility is a clear indication of the owner's confidence in the country and its evolving business environment. The launch of the lumber treatment plant certainly represents the kind of innovative response from businesses needed to drive our economy and country to the next stage of its development," he said. Mr. Hylton was speaking at the official opening of the state-of-the-art facility at its Bell Road location in Kingston, on Wednesday, March 13.

The Minister further noted that innovations of this type contribute to strengthening the country's export capacity and are consistent with advancing the national export strategy, which seeks to improve the country's export performance by advancing the competitiveness of firms and sectors, while enhancing the business and trade environment. "ARC Manufacturing, with its innovative plant, represents the next wave of value-added manufacturing that we expect will benefit directly from the implementation of the Global Logistics Hub, which will serve as a global supply chain platform to support the country's export sector," he said.

Mr. Hylton further noted that the launch also signifies the company's efforts to improve the competitiveness of the lumber supply sector in Jamaica. "Management's efforts to push the company further up the value chain will inevitably position it to take advantage of export opportunities and to generate export earnings, while creating employment and increasing the Gross Domestic Product (GDP) of our country," he said.

The Minister praised Osmose Inc., a wood preserving company which collaborated with ARC to establish the facility, for the "strategic partnership in allowing the technology to come not simply to ARC but to Jamaica."

General Manager for Osmose Inc. in Latin America and the Caribbean, Dr. Javier Romero, said the facility, which is the first of its kind in Jamaica, "is the most modern and advanced plant I know in the Caribbean and the Latin-American region."

While the plant, which is fully automated, has the capacity to treat approximately 500,000 board feet (BF) of lumber per 40-hour week, Dr. Romero noted that it has the potential to treat 50 million BF per year, "allowing ARC to cover the domestic demand and also export markets." He further noted that he is pleased that the company will be using the most environmentally friendly preservative in its treatment operations.

The plant, which occupies approximately 11,000 square feet of space, is equipped with a pressure tank, four holding tanks, and a control room. It utilises a process called Pressure Treatment for its lumber and wood products. The type of treatment used is a form of micronized copper preservative, which is widely used in the construction industry in the United States.

ARC is one of Jamaica's leading manufacturers and distributors of building materials. The company locally manufactures construction wires and mesh, nails, fencing and roofing products. It also distributes lumber, cement, steel and steel products.

By Alecia Smith-Edwards, JIS Reporter

ENTRIES NOW BEING ACCEPTED FOR THE 2018 CAJ AWARDS PROGRAM


OTTAWA, Dec. 11, 2018 – The Canadian Association of Journalists is pleased to announce entries are now being accepted for the 2018 CAJ Awards program, featuring Canada's top investigative journalism award, the Don McGillivray Award. The deadline for entries is Jan. 14, 2019.

Full information on the 2018 CAJ Awards is now posted on the Awards section of our website. You can also go directly to the submission site by clicking here.

Members always get the best rates, and those considering an entry are encouraged to become a CAJ member as part of entering the awards. For example, CAJ members eligible to submit an individual entry into the Community Media or Community Broadcast categories will have their entry fee waived.

"The past calendar year has seen Canadian journalists continue to respond to newsroom closures, cutbacks and attrition in the best way they know how – by continuing to produce the kind of journalism that affects change, changes opinions and makes our communities better places to live," said CAJ president Karyn Pugliese. "The CAJ Awards is one of the most rewarding programs offered by our association and we encourage all those journalists who did good work in 2018 to enter."

The CAJ Awards finalists will be announced by the mid- to end of March 2019, with the winners announced at the 2019 CAJ Conference scheduled for May 3-5, 2019 in Winnipeg.

The CAJ is Canada's largest national professional organization for journalists from all media, representing members across the country. The CAJ's primary roles are to provide public-interest advocacy and high-quality professional development for its members.

General awards questions can always be submitted to awards@caj.ca

IITKGP pilot project to protect heritage along Hooghly river


Kolkata: IIT Kharagpur, the oldest and largest IIT in the country, has initiated a pilot project to protect the rich cultural heritage of the cities and towns along the Hooghly river.

The project would focus on five former trading posts and garrison settlements near Kolkata along the Hooghly river – Bandel, Chinsurah, Chandernagore, Serampore and Barrackpore, an IIT KGP statement said. The pockets bear traces of Portuguese (Bandel), Dutch (Chinsurah), British (Barrackpore), French (Chandernagore) and Danish (Serampore) presence, as well as India's own rich culture. The pilot project has been initiated by IIT Kharagpur's department of humanities and social sciences, in association with the University of Liverpool, UK, the statement said.

Principal Investigator on behalf of IIT KGP, Prof Jenia Mukherjee, said, "These places, being peripheral cities surrounding Kolkata, are not getting enough exposure. And yet, in these cities too, heritage buildings are making way for apartments, multiplexes and so on."

Among the top priorities of the project is the conservation of centuries-old buildings, which are mainly private houses, she said. Lack of funds makes maintenance difficult for even those willing to preserve their properties, Mukherjee said, adding, "We will be seeing if it is possible to build up a public-private partnership for the upkeep of these structures."

The project is being jointly funded by the Arts and Humanities Research Council, UK, and the Indian Council for Historical Research, and the idea is to involve the people of the region as "owner-custodians" of this heritage, she said.

The project team recently held an exhibition at Chandernagore with the Institut de Chandernagore, which got an overwhelming public response. The Institut de Chandernagore is one of the oldest museums of the region and boasts a collection of French antiques.

Styling with Khadi


Khadi and Village Industries Commission (KVIC) of MSME, Government of India, participated in the 35th edition of the India International Trade Fair (IITF), organised from November 14 to 27 at Pragati Maidan. This year, the theme of IITF-2015 was "MAKE IN INDIA". KVIC set up its biggest ever exclusive pavilion, and sales in the khadi pavilion recorded approximately 20 lakh per day.

During the fashion show, eight sequences like Make in India, Swachh Bharat Abhiyaan, Videsh se swadesh, woollen wear and bridal wear were presented by leading fashion models.

Giriraj Singh, Minister of State for MSME, inaugurated the Khadi fashion show as chief guest in the presence of Vinai Kumar Saxena, Chairman, KVIC, along with Anup K. Pujari, Secretary, B.H. Anil Kumar, Jt. Secretary, MSME, and Arun Kumar Jha, Chief Executive Officer, KVIC, who graced the occasion.

For the first time in the Khadi Pavilion, a large statue of Mahatma Gandhi was put up at the main entrance of the hall for taking a "selfie with Gandhiji". This became the hotspot of the pavilion, as people from all age groups visited and took selfies with Gandhiji in different poses. For the first time, two hundred participants, i.e. artisans, craftsmen and khadi institutions from across the country, put up their most exclusive products in the pavilion. Ten nationalised banks also associated with KVIC as sponsors and put up their stalls inside the pavilion. Also for the first time, a technical demonstration of the spinning and weaving of khadi was displayed in the pavilion.

In India, Khadi is not just a cloth, it is a whole movement started by Mohandas Karamchand Gandhi. The Khadi movement promoted an ideology, an idea that Indians could be self-reliant on cotton and be free from the high-priced goods and clothes which the British were selling to them. The British would buy cotton from India at cheap prices and export it to Britain, where it was woven to make clothes. These clothes were then brought back to India to be sold at hefty prices. The khadi movement aimed at boycotting foreign goods, including cotton, and promoting Indian goods, thereby improving India's economy. Mahatma Gandhi began promoting the spinning of khadi for rural self-employment and self-reliance (instead of using cloth manufactured industrially in Britain) in 1920s India, thus making khadi an integral part and icon of the Swadeshi movement. The freedom struggle revolved around the use of khadi fabrics and the dumping of foreign-made clothes. When some people complained about the costliness of khadi to Mahatma Gandhi, he started wearing only a dhoti.

Get to know ASP.NET Core Web API Tutorial


ASP.NET Web API is a framework that makes it easy to build HTTP services that reach a broad range of clients, including browsers and mobile devices. ASP.NET Web API is an ideal platform for building RESTful applications on the .NET Framework. In today's post we shall be looking at the following topics:

Quick recap of the MVC framework
Why Web APIs were incepted and their evolution
Introduction to .NET Core
Overview of ASP.NET Core architecture

This article is an extract from the book Mastering ASP.NET Web API written by Mithun Pattankar and Malendra Hurbuns.

Quick recap of MVC framework

Model-View-Controller (MVC) is a powerful and elegant way of separating concerns within an application, and it applies itself extremely well to web applications. With ASP.NET MVC, it's translated roughly as follows:

Models (M): These are the classes that represent the domain you are interested in. These domain objects often encapsulate data stored in a database as well as code that manipulates the data and enforces domain-specific business logic. With ASP.NET MVC, this is most likely a Data Access Layer of some kind, using a tool like Entity Framework or NHibernate or classic ADO.NET.
View (V): This is a template to dynamically generate HTML.
Controller (C): This is a special class that manages the relationship between the View and the Model. It responds to user input, talks to the Model, and decides which view to render (if any). In ASP.NET MVC, this class is conventionally denoted by the suffix Controller.

Why Web APIs were incepted and their evolution

Looking back to the days when ASP.NET ASMX-based XML web services were widely used for building service-oriented applications, they were the easiest way to create SOAP-based services which could be used by both .NET and non-.NET applications. They were available only over HTTP. Around 2006, Microsoft released Windows Communication Foundation (WCF). WCF was, and still is, a powerful technology for building SOA-based applications. It was a giant leap in the Microsoft .NET world. WCF was flexible enough to be configured as an HTTP service, Remoting service, TCP service, and so on. Using WCF contracts, we could keep the entire business logic code base the same and expose the service as HTTP-based or non-HTTP-based via SOAP/non-SOAP.

Until 2010, ASMX-based XML web services and WCF services were widely used in client-server applications; in fact, everything was running smoothly. But developers in both the .NET and non-.NET communities started to feel the need for a completely new SOA technology for client-server applications. Some of the reasons behind this were as follows:

With applications in production, the amount of data exchanged during communication started to explode, and transferring it over the network was bandwidth consuming.
SOAP, despite being lightweight to some extent, started to show signs of payload increase: SOAP packets of a few KB were becoming a few MB of data transfer.
Consuming a SOAP service led to a huge application size because of WSDL and proxy generation. This was even worse when used in web applications.
Any change to a SOAP service required consumers to regenerate their proxies. This wasn't an easy task for any developer.
JavaScript-based web frameworks were being released and gaining ground with a much simpler way of web development. Consuming SOAP-based services was not an optimal fit for them.
Hand-held devices like tablets and smartphones were becoming popular. They had more focused applications and needed a very lightweight service-oriented approach.
Browser-based Single Page Applications (SPAs) were gaining ground very rapidly, and SOAP-based services were quite heavy for these SPAs.
Microsoft released REST-based WCF components which could be configured to respond in JSON or XML, but even then it was WCF, which was a heavy technology to use.
Applications were no longer just large enterprise services; there was a need for more focused, lightweight services that could be up and running in a few days and much easier to use.

Any developer who had seen the evolving nature of SOA-based technologies like ASMX, WCF or anything SOAP-based felt the need for much lighter, HTTP-based services. HTTP-only, JSON-compatible, POCO-based lightweight services were the need of the hour, and the concept of the Web API started gaining momentum.

What is Web API?

A Web API is a programmatic interface to a system that is accessed via standard HTTP methods and headers. A Web API can be accessed by a variety of HTTP clients, including browsers and mobile devices. For Web API to be a successful HTTP-based service, it needed strong web infrastructure like hosting, caching, concurrency, logging, security and so on. One of the best web infrastructures was none other than ASP.NET. ASP.NET, either in the form of Web Forms or MVC, was widely adopted, so this solid web infrastructure was mature enough to be extended as Web API.

Microsoft responded to community needs by creating ASP.NET Web API: a super-simple yet very powerful framework for building HTTP-only, JSON-by-default web services without all the fuss of WCF. ASP.NET Web API can be used to build REST-based services in a matter of minutes and can easily be consumed with any front-end technology. It used IIS (mostly) for hosting, caching, concurrency and other features, and it became quite popular.

It was launched in 2012 with the most basic needs for HTTP-based services, like convention-based routing and HTTP request and response messages. Later, Microsoft released the much bigger and better ASP.NET Web API 2 along with ASP.NET MVC 5 in Visual Studio 2013. ASP.NET Web API 2 evolved at a much faster pace with the following features.

Installed via NuGet

Installing Web API 2 was made simpler by using NuGet: either create an empty ASP.NET or MVC project and then run this command in the NuGet Package Manager Console:

Install-Package Microsoft.AspNet.WebApi

Attribute Routing

The initial release of Web API was based on convention-based routing, meaning we define one or more route templates and work around them. It's simple, without much fuss, as the routing logic sits in a single place and is applied across all controllers. Real-world applications are more complicated, with resources (controllers/actions) having child resources: customers having orders, books having authors, and so on. In such cases, convention-based routing is not scalable. Web API 2 introduced the new concept of attribute routing, which uses attributes in the programming language to define routes. One straightforward advantage is that the developer has full control over how URIs for the Web API are formed. Here is a quick snippet of attribute routing:

[Route("customers/{customerId}/orders")]
public IEnumerable<Order> GetOrdersByCustomer(int customerId) { ... }

For more understanding on this, read Attribute Routing in ASP.NET Web API 2 (https://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2)

OWIN self-host

ASP.NET Web API lives on the ASP.NET framework, which may lead you to think that it can be hosted on IIS only. Web API 2 came with a new hosting package:

Microsoft.AspNet.WebApi.OwinSelfHost

With this package it can be self-hosted outside IIS using OWIN/Katana.
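To make the self-hosting flow concrete, here is a minimal sketch of OWIN self-hosting in a console application, assuming the Microsoft.AspNet.WebApi.OwinSelfHost package is installed; the port number and class names are illustrative, not taken from the book:

using System;
using System.Web.Http;
using Microsoft.Owin.Hosting; // provides WebApp.Start
using Owin;

// Startup wires Web API into the OWIN pipeline.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();

        // Classic convention-based route; attribute routes could be
        // enabled instead with config.MapHttpAttributeRoutes().
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });

        app.UseWebApi(config);
    }
}

public class Program
{
    public static void Main(string[] args)
    {
        // Host the API inside this console process instead of IIS.
        string baseAddress = "http://localhost:9000/";
        using (WebApp.Start<Startup>(url: baseAddress))
        {
            Console.WriteLine("Web API self-hosted at " + baseAddress);
            Console.ReadLine(); // keep the process alive until Enter is pressed
        }
    }
}

The design point here is that Katana decouples the HTTP host from IIS: the same HttpConfiguration works unchanged whether the API runs under IIS or inside your own process.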
CORS (Cross-Origin Resource Sharing)

For any Web API, developed using .NET or non-.NET technologies and meant to be used across different web frameworks, enabling CORS is a must. A must-read on CORS and ASP.NET Web API 2: (https://www.asp.net/web-api/overview/security/enabling-cross-origin-requests-in-web-api).
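As a quick illustration of enabling CORS in Web API 2, here is a minimal sketch using the Microsoft.AspNet.WebApi.Cors package; the origin URL and controller are invented examples, not from the original article:

using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Turn on attribute-based CORS support across the Web API.
        config.EnableCors();
        config.MapHttpAttributeRoutes();
    }
}

// Allow a single trusted SPA origin to call this controller.
[EnableCors(origins: "http://example-spa.local", headers: "*", methods: "GET,POST")]
public class OrdersController : ApiController
{
    public IHttpActionResult Get()
    {
        // The CORS response headers are added automatically by the framework.
        return Ok(new[] { "order1", "order2" });
    }
}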
IHttpActionResult and Web API OData improvements are a few other notable features which helped Web API 2 evolve into a strong technology for developing HTTP-based services. ASP.NET Web API 2 has become more powerful over the years with C# language improvements like asynchronous programming using async/await, LINQ, Entity Framework integration, dependency injection with DI frameworks, and so on.

ASP.NET into the Open Source world

Every technology has to evolve with growing needs and advancements in the hardware, network and software industries, and ASP.NET Web API is no exception. Some of the evolution that ASP.NET Web API needed to undergo, from the perspectives of the developer community, enterprises and end users, was as follows:

ASP.NET MVC and Web API are both part of the ASP.NET stack, but their implementations and code bases are different. A unified code base reduces the burden of maintaining them.
It's known that Web APIs are consumed by various clients like web applications, native apps, hybrid apps and desktop applications, using different technologies (.NET or non-.NET). But how about developing Web APIs in a cross-platform way, without always relying on the Windows OS/Visual Studio IDE?
Open sourcing the ASP.NET stack so that it's adopted on a much bigger scale.
End users benefit from open source innovations.

We saw why Web APIs were incepted, how they evolved into a powerful HTTP-based service, and some evolutions required. With these thoughts, Microsoft made an entry into the world of Open Source by launching .NET Core and ASP.NET Core 1.0.

What is .NET Core?

.NET Core is a cross-platform, free and open-source managed software framework similar to the .NET Framework. It consists of CoreCLR, a complete cross-platform runtime implementation of the CLR. .NET Core 1.0 was released on 27 June 2016 along with Visual Studio 2015 Update 3, which enables .NET Core development. In much simpler terms, .NET Core applications can be developed, tested and deployed on platforms such as Windows, Linux flavours and macOS systems. With the help of .NET Core, we don't really need the Windows OS, and in particular the Visual Studio IDE, to develop ASP.NET web applications, command-line apps, libraries and UWP apps. In short, let's understand the .NET Core components:

CoreCLR: A virtual machine that manages the execution of .NET programs. CoreCLR means Core Common Language Runtime; it includes the garbage collector, JIT compiler, base .NET data types and many low-level classes.
CoreFX: The .NET Core foundational libraries, like classes for collections, file systems, console, XML, async and many others.
CoreRT: The .NET Core runtime optimized for AOT (ahead-of-time compilation) scenarios, with the accompanying .NET Native compiler toolchain. Its main responsibility is native compilation of code written in any of our favourite .NET programming languages.

.NET Core shares a subset of the original .NET Framework, plus it comes with its own set of APIs that are not part of the .NET Framework. This results in some shared APIs that can be used by both .NET Core and the .NET Framework. A .NET Core application can easily work on the existing .NET Framework, but not vice versa.

.NET Core provides a CLI (Command Line Interface) as an execution entry point for operating systems and provides developer services like compilation and package management. The following are interesting points to know about .NET Core:

.NET Core can be installed cross-platform on Windows, Linux and macOS. It can be used in device, cloud, and embedded/IoT scenarios.
The Visual Studio IDE is not mandatory for working with .NET Core, but when working on Windows OS we can leverage our existing IDE knowledge.
.NET Core is modular, meaning that instead of assemblies, developers deal with NuGet packages.
.NET Core relies on its package manager to receive updates, because a cross-platform technology can't rely on Windows Update.
To learn .NET Core, we just need a shell, a text editor and the runtime installed.
.NET Core comes with flexible deployment: it can be included in your app or installed side-by-side user- or machine-wide. .NET Core apps can also be self-hosted/run as standalone apps.
.NET Core supports four cross-platform scenarios: ASP.NET Core web apps, command-line apps, libraries, and Universal Windows Platform apps. It does not implement Windows Forms or WPF, which render the standard GUI for desktop software on Windows.
At present, only the C# programming language can be used to write .NET Core apps. F# and VB support are on the way.

We will primarily focus on ASP.NET Core web apps, which include MVC and Web API. CLI apps and libraries will be covered briefly.

What is ASP.NET Core?

ASP.NET Core is a new open-source and cross-platform framework for building modern cloud-based web applications using .NET. It is completely open source; you can download it from GitHub. It's cross-platform, meaning you can develop ASP.NET Core apps on Linux/macOS and, of course, on Windows OS. ASP.NET was first released almost 15 years back with the .NET Framework. Since then it has been adopted by millions of developers for large and small applications, and ASP.NET has evolved with many capabilities. With .NET Core as a cross-platform runtime, ASP.NET took a huge leap beyond the boundaries of the Windows OS environment for the development and deployment of web applications.

ASP.NET Core overview

A high-level overview of ASP.NET Core provides the following insights:

ASP.NET Core runs both on the full .NET Framework and on .NET Core.
ASP.NET Core applications on the full .NET Framework can be developed and deployed only on Windows OS/Server.
When using .NET Core, applications can be developed and deployed on the platform of choice. The logos of Windows, Linux and macOS indicate that you can work with ASP.NET Core on any of them.
ASP.NET Core on a non-Windows machine uses the .NET Core libraries to run the applications. It's obvious you won't have all the full .NET libraries, but most of them are available.
Developers working on ASP.NET Core can easily switch to working on any machine, not confined to the Visual Studio 2015 IDE.
ASP.NET Core apps can run against different versions of .NET Core.
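To make the unified programming model concrete, here is a minimal sketch of an ASP.NET Core Web API controller in the style of the 1.0-era project template; the route and return values are illustrative:

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// Attribute routing is built in; [controller] expands to "values".
[Route("api/[controller]")]
public class ValuesController : Controller
{
    // GET api/values
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new[] { "value1", "value2" };
    }

    // GET api/values/5
    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        if (id < 0)
            return NotFound(); // the unified IActionResult replaces Web API's IHttpActionResult

        return Ok("value" + id);
    }
}

Note that the same MVC stack and the same base Controller class serve both HTML views and JSON APIs, which is the unified code base the next section's advantages list refers to.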
ASP.NET Core has many more foundational improvements apart from being cross-platform; we gain the following advantages when using ASP.NET Core:

Totally modular: ASP.NET Core takes a totally modular approach to application development; every component needed to build an application is well factored into NuGet packages. Only add the required packages through NuGet to keep the overall application lightweight. ASP.NET Core is no longer based on System.Web.dll.
Choose your editors and tools: The Visual Studio IDE was used to develop ASP.NET applications on a Windows OS box; now that we have moved beyond the Windows world, we require IDEs/editors/tools for developing ASP.NET applications on Linux/macOS. Microsoft developed a powerful, lightweight code editor for almost any type of web application, called Visual Studio Code. .NET Core is such a framework that we don't need Visual Studio IDE/Code to develop applications; we can also use code editors like Sublime and Vim. To work with C# code in these editors, install and use the OmniSharp plugin. OmniSharp is a set of tooling, editor integrations and libraries that together create an ecosystem that allows you to have a great programming experience no matter what your editor and operating system of choice may be.
Integration with modern web frameworks: ASP.NET Core has powerful, seamless integration with modern web frameworks like Angular, Ember, NodeJS, and Bootstrap. Using Bower and NPM, we can work with modern web frameworks.
Cloud ready: ASP.NET Core apps are cloud ready with a configuration system; an app seamlessly transitions from on-premises to the cloud.
Built-in dependency injection.
Can be hosted on IIS, self-hosted in your own process, or hosted on nginx.
New lightweight and modular HTTP request pipeline.
Unified code base for Web UI and Web APIs. We will see more on this when we explore the anatomy of an ASP.NET Core application.

To summarize, we covered the MVC framework and introduced .NET Core and its architecture. To leverage ASP.NET Web API to build professional web services and create powerful applications, check out this book, Mastering ASP.NET Web API written by Mithun Pattankar and Malendra Hurbuns.

Read Next:
What is ASP.NET Core?
Why ASP.NET makes building apps for mobile and web easy – Interview with Jason de Oliveira
How to call an Azure function from an ASP.NET Core MVC application

5 ways artificial intelligence is upgrading software engineering


47% of digitally mature organizations, or those that have advanced digital practices, said they have a defined AI strategy (Source: Adobe). It is estimated that AI-enabled tools alone will generate $2.9 trillion in business value by 2021. 80% of enterprises are smartly investing in AI. The stats speak for themselves: AI clearly follows the motto "go big or go home".

This explosive growth of AI in different sectors of technology is also beginning to show its colors in software development. Shawn Drost, co-founder and lead instructor of coding boot camp 'Hack Reactor', says that AI still has a long way to go and is only impacting the workflow of a small portion of software engineers on a minority of projects right now. AI promises to change how organizations will conduct business and to make applications smarter. It is only logical, then, that software development, i.e. the way we build apps, will be impacted by AI as well. Forrester Research recently surveyed 25 application development and delivery (AD&D) teams, and respondents said AI will improve planning, development and especially testing. We can expect better software created under traditional environments.

5 areas of Software Engineering AI will transform

The 5 major spheres of software development (software design, software testing, GUI testing, strategic decision making, and automated code generation) are all areas where AI can help. A majority of interest in applying AI to software development is already seen in automated testing and bug detection tools. Next in line are the software design precepts, decision-making strategies, and finally automating software deployment pipelines. Let's take an in-depth look into the areas of high and medium interest of software engineering impacted by AI, according to the Forrester Research report.

(Chart source: Forbes.com)

#1 Software design

In software engineering, planning a project and designing it from scratch need designers to apply their specialized learning and experience to come up with alternative solutions before settling on a definite solution. A designer begins with a vision of the solution, and after that moves back and forth investigating design changes until they reach the desired solution. Settling on the correct design choices for each stage is a tedious and mistake-prone activity for designers. Along this line, a few AI developments have demonstrated the advantages of enhancing traditional methods with intelligent agents. The catch here is that the agent behaves like an individual partner to the client. This assistant should have the capacity to offer timely direction on how to carry out design projects. For instance, take the example of AIDA, the Artificial Intelligence Design Assistant, deployed by Bookmark (a website building platform). Using AI, AIDA understands a user's needs and desires and uses this knowledge to create an appropriate website for the user. It makes selections from millions of combinations to create a website style, focus, image and more that are customized for the user. In about 2 minutes, AIDA designs the first version of the website, and from that point it becomes a drag-and-drop operation. You can get a detailed overview of this tool on designshack.

#2 Software testing

Applications interact with each other through countless APIs. They leverage legacy systems and grow in complexity every day. This increase in complexity also leads to its fair share of challenges that can be overcome by machine-based intelligence.
AI tools can be used to generate test data, check the validity of that data, improve and analyse test coverage, and also for test management. Artificial intelligence, trained right, can ensure the testing performed is error free. Testers freed from repetitive manual tests thus have more time to create new automated software tests with sophisticated features. Also, if software tests are repeated every time source code is modified, repeating those tests can be not only time-consuming but extremely costly. AI comes to the rescue once again by automating the testing for you! With AI-automated testing, one can increase the overall scope of tests, leading to an overall improvement in software quality.

Take, for instance, the Functionize tool. It enables users to test fast and release faster with AI-enabled cloud testing. Users just have to type a test plan in English, and it will automatically be converted into a functional test case. The tool allows one to elastically scale functional, load, and performance tests across every browser and device in the cloud. It also includes self-healing tests that update autonomously in real time. SapFix is another AI hybrid tool, deployed by Facebook, which can automatically generate fixes for specific bugs identified by 'Sapienz'. It then proposes these fixes to engineers for approval and deployment to production.

#3 GUI testing

Graphical User Interfaces (GUIs) have become important in interacting with today's software. They are increasingly being used in critical systems, and testing them is necessary to avert failures. With very few tools and techniques available to aid in the testing process, testing GUIs is difficult. Currently used GUI testing methods are ad hoc. They require the test designer to perform humongous tasks like manually developing test cases, identifying the conditions to check during test execution, determining when to check these conditions, and finally evaluating whether the GUI software is adequately tested. Phew! Now that is a lot of work. Also, not forgetting that if the GUI is modified after being tested, the test designer must change the test suite and perform re-testing. As a result, GUI testing today is resource intensive, and it is difficult to determine if the testing is adequate.

Applitools is a GUI testing tool empowered by AI. The Applitools Eyes SDK automatically tests whether visual code is functioning properly or not. Applitools enables users to test their visual code just as thoroughly as their functional UI code, to ensure that the visual look of the application is as you expect it to be. Users can test how their application looks in multiple screen layouts to ensure that they all fit the design. It allows users to keep track of both the behaviour and the look of the web page. Users can test everything they develop, from the functional behaviour of their application to its visual look.
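To give a feel for how such a visual-testing SDK is driven from test code, here is a rough sketch in C# modelled on Applitools' classic Eyes API for Selenium. The package, namespace and method names are recalled from its documentation and should be treated as assumptions to verify against the current SDK; the URL, API key and test names are invented for illustration:

using System.Drawing;
using Applitools;                 // assumed namespace of the Eyes.Selenium NuGet package
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public class VisualSmokeTest
{
    public static void Main()
    {
        IWebDriver driver = new ChromeDriver();
        var eyes = new Eyes();          // entry point for visual checkpoints
        eyes.ApiKey = "YOUR_API_KEY";   // placeholder credential

        try
        {
            // Start a visual test session at a fixed viewport size.
            driver = eyes.Open(driver, "Demo App", "Home page smoke test",
                               new Size(1024, 768));

            driver.Navigate().GoToUrl("https://example.com"); // illustrative URL

            // Capture the window; the service compares it to the stored baseline.
            eyes.CheckWindow("Home page");

            eyes.Close(); // fails the test if visual differences were found
        }
        finally
        {
            driver.Quit();
            eyes.AbortIfNotClosed(); // cleans up if the test exited early
        }
    }
}

The pattern to note is that the test script only marks checkpoints; the pixel-level comparison and the decision about what counts as a meaningful difference happen on the AI-backed service side.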
#4 Using Artificial Intelligence in Strategic Decision-Making

Normally, developers have to go through a long process to decide what features to include in a product. However, a machine learning AI solution trained on business factors and past development projects can analyze the performance of existing applications and help both teams of engineers and business stakeholders like project managers to find solutions that maximize impact and cut risk. Normally, the transformation of business requirements into technology specifications requires a significant timeline for planning. Machine learning can help software development companies to speed up the process, deliver the product in less time, and increase revenue within a short span.

The AI Canvas is a well-known tool for strategic decision making. The canvas helps identify the key questions and feasibility challenges associated with building and deploying machine learning models in the enterprise. The AI Canvas is a simple tool that helps enterprises organize what they need to know into seven categories, namely Prediction, Judgement, Action, Outcome, Input, Training and Feedback. Clarifying these seven factors for each critical decision throughout the organization will help in identifying opportunities for AI to either reduce costs or enhance performance.

#5 Automatic Code generation/Intelligent Programming Assistants

Coding a huge project from scratch is often labour intensive and time consuming. An intelligent AI programming assistant can reduce the workload to a great extent. To combat the issues of time and money constraints, researchers have tried to build systems that can write code before, but the problem is that these methods aren't that good with ambiguity. Hence, a lot of detail is needed about what the target program aims to do, and writing down these details can be as much work as just writing the code. With AI, the story can be flipped.

'Bayou', an AI-based application, is an intelligent programming assistant. It began as an initiative aimed at extracting knowledge from online source code repositories like GitHub. Users can try it out at askbayou.com. Bayou follows a method called neural sketch learning. It trains an artificial neural network to recognize high-level patterns in hundreds of thousands of Java programs. It does this by creating a "sketch" for each program it reads and then associating this sketch with the "intent" that lies behind the program. This DARPA initiative aims at making programming easier and less error prone. Sounds intriguing? Now that you know how this tool works, why not try it for yourself on i-programmer.info.

Summing it all up

Software engineering has seen massive transformation over the past few years. AI and software intelligence tools aim to make software development easier and more reliable. According to a Forrester Research report on AI's impact on software development, automated testing and bug detection tools use AI the most to improve software development. It will be interesting to see the future developments in software engineering empowered by AI. I'm expecting faster, more efficient, more effective, and less costly software development cycles, while engineers and other development personnel focus on bettering their skills to make advanced use of AI in their processes.

Read Next
Implementing Software Engineering Best Practices and Techniques with Apache Maven
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
15 million jobs in Britain at stake with AI robots set to replace humans at workforce

Airbnb says revenue for Q3 2018 was best ever, topping $1 billion


Monday, November 19, 2018
Posted by The Canadian Press

NEW YORK — Airbnb has reported its best quarter ever, even as cities across the U.S. have started clamping down on the short-term rental market.

Revenue during the third quarter breezed past the US$1 billion level as guest reservations boomed internationally in places like Beijing, Mexico City and Birmingham, England, the San Francisco company said Friday. Airbnb expects a record one million guests to stay at Airbnb listings across the U.S. during the Thanksgiving holiday.

Airbnb acts as an online booking agent for homeowners to make extra income by renting rooms, apartments and houses. Its growth has drawn the ire of the hotel industry and communities in the U.S. and abroad, where locals are uneasy with the constant turnaround of guests in their neighbourhoods and apartment buildings.

In some markets, like New York and Miami, there is evidence that home-sharing has cut into hotel profits, pushing some larger chains to get in on the action. Last month Marriott said it was expanding its home-sharing pilot in London to three additional European cities, while Hyatt announced it was pulling out of a money-losing collaboration with luxury home-sharing company Oasis.

Many cities and states across the U.S. have tightened rental guidelines in order to regulate the rapidly growing industry. New York and Washington have both imposed strict limits on short-term rental companies, and housing-starved San Francisco has done the same, in addition to suing or fining homeowners who illegally rent their homes.

In Europe, officials in top travel destinations are grappling with the massive growth in home-sharing. Residents of Venice and Barcelona have staged repeated protests, saying the influx of visitors is driving up rents and forcing out locals. Parisians are complaining about the onslaught of tourists in their neighbourhoods and buildings, late-night parties and drunken revellers. Airbnb listings in Paris have grown to 65,000, from just 4,000 in 2012.

February 20, 2012


One of the announcements in our report from Friday, Feb. 17, 2012 talked about an interview with Paolo Soleri by Prof. Constance Devereaux, Program Coordinator, NAU Arts and Cultural Management. Prof. Constance Devereaux visited Arcosanti with a group of students from the NAU ACM 410 class to use Arcosanti as a case study for examining leadership transition and strategic planning, as these are two significant issues with which an arts and cultural manager should have experience.

[photo: Chihiro Saito & text: Jeff Stein, Sue]

The opportunity to observe and study these two areas in an internationally-known organization which itself is working on these areas would provide an incredible experiential learning opportunity. 3 to 4 visits are planned during the semester.

[photo: Chihiro Saito & text: Jeff Stein]

Students would formulate a case-study research question that would ideally be a real issue that Arcosanti is interested in. Students would then conduct the study and produce a written report with recommendations.

[photo: Chihiro Saito & text: Jeff Stein]

Polish pay TV operator nc+ is to launch a new pre-paid service


Polish pay TV operator nc+ is to launch a new pre-paid service on October 1, offering over 50 channels, including 19 in HD, from PLN19.90 (€4.75) a month.

The operator is providing new users with free access to 130 channels, including premium Canal+ channels, for the first month, after which they can choose to subscribe or take the stripped-down pre-paid offering. The nc+ telewizja na kartę offering provides a range of channels via the ITI 5800SX HD DVR decoder for upwards of PLN19.90 a month, with a one-off top-up offered after three months.

Piotr Malicki, sales director for pre-paid products and services, said that the new offer was "an excellent option for those who do not want to be tied to long-term contracts" and offered an extensive range of HD services including Ale kino+ HD, Planete+ HD, Eurosport HD and Eurosport2 HD, TVP Sport HD, TVN24 HD and Romance TV HD.

Al Jazeera Media Network is reportedly planning to launch Al Jazeera English online in the US


Al Jazeera Media Network is reportedly planning to make its English-language international news channel available to viewers in the US via the web.

According to an email seen by Reuters, Al Jazeera plans to make Al Jazeera English available in the US across digital platforms in September and is in talks with cable carriers about hosting the live stream.

The news comes three months after Al Jazeera America closed its cable and digital operations, citing an unsustainable business model. The US network stopped operating at the end of April, less than three years after its launch. In an email to employees at the time, Al Jazeera America CEO Al Anstey said: "our business model is simply not sustainable in light of the economic challenges in the US media marketplace".

However, at the same time, parent company Al Jazeera Media Network revealed plans to expand its global digital operations into the US so that it can better compete in an "overwhelmingly digital world" and serve "today's 24-hour digitally focused audience".