Addresses needs in production, collaboration, execution, and compliance.
KNIME has rolled out its mid-year platform release today for its open-source KNIME Analytics Platform and its commercial KNIME Server.
Here’s what’s new:
Production: Integrated Deployment is now GA to seamlessly move from model creation to production
Collaboration: Guided Analytics has a revamped UX/UI, balancing automation and human interaction
Execution: Executor Groups and Reservations enable more flexible, hybrid, and elastic scaling
Compliance: Metadata Mapping is new for full documentation and simplified auditing of workflows
“For predictive models, there is usually a hard stop after creation, and manual steps are needed for deployment, resulting in lots of friction in the enterprise,” said Michael Berthold, CEO and Co-founder of KNIME. “With our latest developments, we enable enterprises to put their models into production in an integrated, scalable, and compliant way.”
Following are the details of the features released:
Production: Integrated Deployment Closes the Gap From Model Creation to Production
KNIME’s Integrated Deployment generates real value for the enterprise through a seamless transition from model preparation to production and optimization. Along with opening Integrated Deployment to its entire user base, KNIME publishes new customizable solutions for guided machine learning and continuous deployment.
Integrated Deployment, which is unique in the industry, writes out all relevant parts of a workflow for use in production and manages them from the same platform. It includes not only the model chosen for production but also any other relevant part of the creation workflow, such as customized data preprocessing. This technology makes productionizing data science easy, error-proof, and resource-efficient, and it also enables continuous optimization by providing an infrastructure to monitor and automatically update workflows in production.
Integrated Deployment writes out the model creation and data preprocessing workflow for use in production.
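The core idea can be sketched in plain Python (a conceptual stand-in, not KNIME's actual API or format): the deployable artifact bundles the preprocessing steps together with the model, so production data flows through exactly the same steps used during creation.

```python
# Conceptual sketch of the Integrated Deployment idea in plain Python.
# The functions below are hypothetical stand-ins for workflow nodes.

def normalize(rows, peak=10.0):
    # Preprocessing step: scale raw values into [0, 1].
    return [[v / peak for v in row] for row in rows]

def score(rows, threshold=0.5):
    # Stand-in for the trained model: flag rows whose mean exceeds a threshold.
    return [sum(row) / len(row) > threshold for row in rows]

# The deployable unit captures *every* production-relevant step, not just
# the model, so inputs are transformed exactly as during model creation.
production_workflow = [normalize, score]

def run(workflow, rows):
    for step in workflow:
        rows = step(rows)
    return rows

print(run(production_workflow, [[2.0, 3.0], [8.0, 9.0]]))  # [False, True]
```

Deploying the whole pipeline as one unit is what avoids the "hard stop" between creation and deployment: there is no manual re-implementation of preprocessing on the production side.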
Collaboration: Guided Analytics Balances Custom Automation and Human Interaction
Guided Analytics applications can be customized from reusable components available on the KNIME Hub and made available to end users via the KNIME WebPortal. The web portal has just been released with a completely new UX/UI, giving non-experts intuitive access to data science and enabling data science experts and business consumers to work together in an integrated way.
Integrated Deployment workflow created with guided machine learning in the new web portal.
Guided Analytics enables interaction between expert and business users at the appropriate level of complexity.
Execution: KNIME Executors Now Enable More Flexible, Hybrid and Elastic Deployments
KNIME’s flexible execution options leverage enterprise infrastructure choices while covering periods of high demand dynamically. This enables IT to meet computing capacity requirements while controlling cost, for example, by mixing and matching on-prem data centers with cloud resources. Additionally, arbitrary workflows can now be executed on an elastic scaling environment with only one click.
Executor Groups and Reservations are new features in KNIME Server, which is now available on the AWS Marketplace under a bring-your-own-license model; AWS Auto Scaling can also be used with KNIME on a pay-as-you-go basis.
Executor Groups allocate the appropriate resources to business units in the organization.
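The scheduling idea behind Executor Groups and Reservations can be illustrated with a small Python sketch (purely conceptual; the group names, reservation sizes, and logic below are hypothetical, not KNIME Server's actual scheduler): each business unit draws on its reserved executors first, and an elastic pool covers demand spikes up to a cost-controlled limit.

```python
# Conceptual sketch: reserved capacity per business unit plus an elastic
# pool for demand spikes. All names and numbers are hypothetical.

RESERVED = {"marketing": 2, "finance": 3}  # per-group reserved executors
ELASTIC_LIMIT = 4                          # extra executors available on demand

def assign_executors(queued_jobs):
    """Return executors granted per group: reserved capacity first,
    then shared elastic capacity until the limit is reached."""
    plan, elastic_used = {}, 0
    for group, jobs in queued_jobs.items():
        reserved = RESERVED.get(group, 0)
        overflow = max(0, jobs - reserved)
        grant = min(overflow, ELASTIC_LIMIT - elastic_used)
        elastic_used += grant
        plan[group] = min(jobs, reserved) + grant
    return plan

print(assign_executors({"marketing": 5, "finance": 1}))
```

In a real deployment the "elastic pool" would be backed by something like AWS Auto Scaling spinning up additional executor instances, while the reservations keep baseline capacity guaranteed on-prem or in the data center.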
Compliance: Metadata Mapping Delivers Full Documentation for Simplified Auditing
KNIME’s Metadata Mapping with Workflow Summary makes compliance and governance remarkably easy. It can extract and export every detail of a user’s KNIME workflows, from execution details, execution environment, individual node settings, and data sources to high-level, interactive summaries.
This complete documentation can be formatted via simple workflows and doesn’t require additional tools or products. Beyond governance, this capability can be used to actively monitor workflow quality and keep all elements up to date, surfacing segments where remote programming and manual transformations have been enabled, as well as showing which data changes will impact applications and processes further down the line.
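To make the auditing idea concrete, here is a hedged Python sketch of turning a workflow-summary export into a flat audit table. The JSON schema, workflow name, and node names below are invented for illustration; they are not KNIME's actual summary format.

```python
# Hypothetical workflow-summary export; the schema is illustrative only.
import json

summary_json = """
{
  "workflow": "churn_prediction",
  "nodes": [
    {"name": "CSV Reader", "settings": {"path": "customers.csv"}},
    {"name": "Python Script", "settings": {"script": "df = df.dropna()"}},
    {"name": "Decision Tree Learner", "settings": {"max_depth": 5}}
  ]
}
"""

def audit_rows(summary):
    """Flatten node settings into (workflow, node, setting, value) rows."""
    doc = json.loads(summary)
    return [(doc["workflow"], n["name"], key, str(value))
            for n in doc["nodes"]
            for key, value in n["settings"].items()]

def flag_scripting(rows):
    """Surface nodes that embed custom code, a typical audit concern."""
    return [row for row in rows if row[2] == "script"]

rows = audit_rows(summary_json)
print(flag_scripting(rows))
```

A filter like `flag_scripting` corresponds to the use case in the text: surfacing workflow segments where scripting or manual transformations were enabled, so auditors can review exactly those nodes.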