
[Step-by-Step] People Analytics with Artificial Intelligence

by Jeonghwan (Jerry) Choi 2025. 1. 27.


 

 

Install or Update Python (if necessary):

Download the latest version of Python from the official Python website.

https://www.python.org/
Follow the installation instructions for your operating system.

 


 

Refer to the Beginners' Guide:

https://www.python.org/about/gettingstarted/

 


 

Open "Terminal" and type python3 --version to check the installed version.

Then simply type python3 to open the interactive shell and explore further options.

Type print("Hello World") to check that Python is functioning.

 

 

Type quit() or press Control-D to exit the shell (Control-Z then Enter on Windows).
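The checks above can also be run as a short terminal session; python3 -c runs a one-line program without entering the interactive shell (the -c variant is an extra convenience, not required by the steps above):

```shell
# Check the installed Python version
python3 --version

# Run a one-line program without opening the interactive shell
python3 -c 'print("Hello World")'
```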

Click the Python Launcher.

 

 

IDLE tool 

 

Close the Python shell, return to the Mac "Terminal", and run:

cd Documents : change directory to Documents

ls : list the directory contents

mkdir pythonProjects : make a pythonProjects directory

cd pythonProjects : change directory to pythonProjects

touch hello.py : create a hello.py Python file
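Put together, the session looks like this (a sketch; the folder names follow the steps above):

```shell
cd ~/Documents          # change into the Documents folder
mkdir pythonProjects    # create the project folder
cd pythonProjects
touch hello.py          # create an empty Python file
ls                      # confirm that hello.py is listed
```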

 

Double-click hello.py, and it opens in the IDLE editor.

Write your Python code in the .py file in IDLE, then save.

At the Terminal, type the following to run the Python code:

python3 hello.py

 

Check the Python package manager (pip3), which handles third-party packages:

pip3 --version 

 

Installing Langchain

 

At the MacBook Terminal, check the version and activate a virtual environment

python3 --version : check the Python version

python3 -m venv langchain_env : create a langchain_env folder (LangChain environment)

source langchain_env/bin/activate : activate the virtual environment
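As one session (a sketch; deactivate is how you leave the environment when finished):

```shell
python3 --version                    # check the Python version
python3 -m venv langchain_env        # create the virtual environment
source langchain_env/bin/activate    # activate it; the prompt gains a (langchain_env) prefix
which python3                        # should now point inside langchain_env
deactivate                           # leave the environment when finished
```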

 

Install LangChain via pip (the Python package manager)

python3 -m pip install langchain : install LangChain (collects, downloads, and installs LangChain and its dependencies)

 

Optional: install specific integrations (OpenAI, DeepSeek, a vector database such as Pinecone, and web scraping)

python3 -m pip install openai : install the OpenAI API integration

python3 -m pip install deepseek : install the DeepSeek integration

python3 -m pip install pinecone-client : install the Pinecone integration

python3 -m pip install requests beautifulsoup4 : install web-scraping packages

 

Install All Extras:

python3 -m pip install "langchain[all]" : install all extras of LangChain (the quotes are needed in zsh, the default Mac shell)

 

At the Terminal, upgrade LangChain:

python3 -m pip install --upgrade langchain  

python3 -m pip install langchain-community

 

Import the OpenAI module:

Open a Python 3 interactive shell:

python3 : open the Python 3 interactive shell; the prompt changes to >>> (this prompt indicates the Python environment)

 

Set the API key as an environment variable (OpenAI) outside of the Python shell (type exit() to leave the Python shell, then enter this):

export OPENAI_API_KEY="Your OpenAI API Code here"

 

Option: Use a .env file

Outside the Python environment, create a file named .env and put your key in it:

touch .env : create the file, then add the line OPENAI_API_KEY=your_openai_api_key
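For example (a sketch; the key value is a placeholder):

```shell
# Create a .env file holding the key (placeholder value shown)
echo 'OPENAI_API_KEY=your_openai_api_key_here' > .env
cat .env    # confirm the contents
```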

Install the python-dotenv package:
python3 -m pip install python-dotenv

Load the .env file in your Python script:

python3 : go into the Python environment (>>> prompt)

from dotenv import load_dotenv
import os

load_dotenv()  # Load environment variables from the .env file
api_key = os.getenv("OPENAI_API_KEY")

from langchain.llms import OpenAI
llm = OpenAI(temperature=0.7, openai_api_key=api_key)
response = llm("Tell me a joke.")
print(response)

 

Verify the Environment Variable: To check if the OPENAI_API_KEY is set, run the following command in your terminal:

echo $OPENAI_API_KEY

 

 

from langchain.llms import OpenAI : import the OpenAI class from LangChain's langchain.llms module

 

Remark: the OpenAI API account must be funded with a sufficient amount of credit.

 

Testing LangChain Script

llm = OpenAI(temperature=0.7)  # uses the OPENAI_API_KEY environment variable

# or pass the key explicitly (never publish a real key):
llm = OpenAI(temperature=0.7, openai_api_key="sk-your-api-key-here")

response = llm("Tell me a joke.")

print(response)

 

Results: 

Why couldn't the bicycle stand up by itself? Because it was two-tired.

 


 

Below is a concise guide on how you can create and manage a multi-agent workflow—Writer, Reviewer, and Editor—using LangChain and Python to enhance your academic writing in the management field. You can optionally integrate R for statistical checks or data analysis.


1. Overview of the Multi-Agent System

  1. Writer Agent:
    • Generates drafts based on a prompt (e.g., a research question or topic).
    • Adheres to Academy of Management (AOM) guidelines, ensures theoretical underpinnings, and employs scholarly style.
  2. Reviewer Agent:
    • Critiques the draft, checks logical consistency, identifies gaps, and suggests improvements.
    • Ensures coherence with established management theories and references.
  3. Editor Agent:
    • Refines language, corrects grammar, aligns with APA style, and polishes the manuscript for clarity.
    • Maintains consistent tone and scholarly voice.
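Before wiring in an LLM, the three-agent flow can be sketched with plain stub functions (the stub bodies here are placeholders, not the LLM-backed agents defined in Section 3):

```python
# Offline sketch of the Writer -> Reviewer -> Editor pipeline.
# Each stub stands in for an LLM-backed agent defined later in this guide.
def writer(topic: str) -> str:
    return f"DRAFT on '{topic}': ..."

def reviewer(draft: str) -> str:
    return f"REVIEW of [{draft}]: strengthen the theoretical framing."

def editor(reviewed: str) -> str:
    return f"EDITED: {reviewed}"

# The pipeline is simply function composition: each agent's output
# becomes the next agent's input.
final = editor(reviewer(writer("Psychological Capital and OCB")))
print(final)
```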

2. Setting Up Your Environment

  1. Install Python 3.7+
    • Verify installation with:
      python3 --version
  2. Create a Virtual Environment (recommended)
      python3 -m venv multiagent_env
      source multiagent_env/bin/activate   # Mac/Linux
      # or .\multiagent_env\Scripts\activate on Windows
  3. Install Necessary Packages
      pip install langchain openai
      # Optional for advanced usage:
      pip install chromadb faiss-cpu   # for vector storage
      pip install pandas spacy nltk    # for text analysis
      pip install rpy2                 # to integrate R if needed

3. Drafting a Basic Multi-Agent Framework

Step 1: Imports and Configuration

import os
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, AgentType
from langchain.tools import Tool
from langchain.memory import ConversationBufferMemory
from langchain.schema import SystemMessage, HumanMessage

# Make sure you set your OpenAI API key as an environment variable:
# export OPENAI_API_KEY='your_api_key_here'
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "your_api_key_here")

llm = OpenAI(api_key=OPENAI_API_KEY, temperature=0.7, model_name="gpt-4")
memory = ConversationBufferMemory()

Step 2: Define Agent Functions

  1. Writer Agent

def writer_agent(prompt: str) -> str:
    """Generates an initial academic draft following AOM guidelines."""
    system_prompt = """
    You are a high-level academic writer. Adhere to Academy of Management (AOM)
    guidelines, use a clear, scholarly voice, and incorporate appropriate
    theoretical frameworks.
    """
    response = llm.invoke([
        SystemMessage(content=system_prompt),
        HumanMessage(content=prompt)
    ])
    return response.content

  2. Reviewer Agent

def reviewer_agent(draft: str) -> str:
    """Critiques the draft, checking for logical consistency, coherence, and theoretical rigor."""
    system_prompt = """
    You are an expert peer reviewer for a top-tier management journal.
    Provide detailed feedback, suggesting theoretical enhancements and
    structural improvements.
    """
    response = llm.invoke([
        SystemMessage(content=system_prompt),
        HumanMessage(content=draft)
    ])
    return response.content

  3. Editor Agent

def editor_agent(draft: str) -> str:
    """Edits the draft for grammar, style, and APA compliance."""
    system_prompt = """
    You are a professional academic editor. Ensure APA style, correct grammar,
    and improve overall readability and clarity.
    """
    response = llm.invoke([
        SystemMessage(content=system_prompt),
        HumanMessage(content=draft)
    ])
    return response.content

Step 3: Convert Each Function into a LangChain Tool

agents = [
    Tool(name="Writer", func=writer_agent,
         description="Generate academic content based on AOM guidelines."),
    Tool(name="Reviewer", func=reviewer_agent,
         description="Critique and review academic content."),
    Tool(name="Editor", func=editor_agent,
         description="Edit academic content to ensure clarity and APA style.")
]

Step 4: Initialize the Multi-Agent System

research_agent = initialize_agent(
    tools=agents,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    memory=memory
)

Step 5: Create a Workflow

# Step 1: Ask the Writer to generate a draft
topic = "The Influence of Psychological Capital on Organizational Citizenship Behavior"
draft = research_agent.run(f"Writer: Please generate a thorough academic draft on '{topic}'.")

# Step 2: Pass the draft to the Reviewer
feedback = research_agent.run(f"Reviewer: {draft}")

# Step 3: Pass the feedback to the Editor for final polishing
final_draft = research_agent.run(f"Editor: {feedback}")

print("=== Final Draft ===")
print(final_draft)

4. Integrating R for Statistical and Empirical Analysis (Optional)

If you want to run R scripts or packages for advanced statistical modeling:

  1. Install R and rpy2
      pip install rpy2
  2. Use R from Python

import rpy2.robjects as ro
from rpy2.robjects.packages import importr

# Example: Load ggplot2 in R
ggplot2 = importr('ggplot2')

# Example: Create an R data frame and run a linear regression
ro.r('df <- data.frame(x = rnorm(50), y = rnorm(50))')
ro.r('model <- lm(y ~ x, data=df)')
summary_output = ro.r('summary(model)')
print(summary_output)

You can then integrate statistical findings back into your writing process by allowing the Writer Agent to incorporate those results or having the Reviewer Agent critique them.
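One way to do that hand-off is to fold the R summary text into the next prompt. A minimal sketch (build_results_prompt is a hypothetical helper, and the summary string is illustrative; in practice it would come from the rpy2 call above):

```python
# Sketch: fold an R regression summary back into the Writer Agent's prompt.
# build_results_prompt is a hypothetical helper; the summary text shown
# is illustrative and would normally come from rpy2.
def build_results_prompt(topic: str, summary_text: str) -> str:
    """Compose a prompt asking the Writer Agent to report empirical results."""
    return (
        f"Revise the Results section of the draft on '{topic}'. "
        f"Report the following regression output in APA style, "
        f"interpreting coefficients and significance:\n\n{summary_text}"
    )

prompt = build_results_prompt(
    "Psychological Capital and OCB",
    "x: estimate = 0.42, p = .013",
)
print(prompt)
# The resulting string would then be passed to writer_agent(prompt)
# or research_agent.run(f"Writer: {prompt}").
```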


5. Potential Enhancements

  1. Automated Citation Management
    • Integrate a citation tool or a vector database (e.g., Chroma, FAISS) containing research papers and references.
    • Let your agents retrieve relevant sources and insert citations.
  2. Template-Based Outputs
    • Provide journal-specific templates (e.g., Academy of Management Journal) for the Writer Agent to follow.
  3. Section-by-Section Review
    • Have agents work section by section (Introduction, Literature Review, Methods, Results, Discussion, Conclusion) for more granular feedback.
  4. Version Control
    • Consider storing each agent’s output in a version control system (e.g., Git) for tracking changes and reverting if needed.

6. Best Practices and Tips

  • Use a High-Quality LLM: For sophisticated academic text, models like GPT-4 can produce more coherent, in-depth analysis.
  • Temperature Settings: Experiment with lower temperatures for more factual and consistent outputs.
  • Prompt Engineering: Provide clear instructions to each agent, referencing style guidelines, desired length, tone, and references as needed.
  • Citation Checks: Regularly verify references to ensure accuracy and academic integrity.

Conclusion

By defining Writer, Reviewer, and Editor agents in LangChain, you can create a streamlined academic writing pipeline tailored to AOM standards. Incorporating R for empirical analysis can further enrich your manuscripts with robust, data-driven insights.

Feel free to adapt the approach to fit your exact needs—whether that’s additional agents, deeper citation management, or more advanced statistical methods. Good luck refining your multi-agent academic writing system!

 

 

 

 

 

 

 

===========

LangChain example (Memory)


from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
import os

# Set the OpenAI API key (or set it as an environment variable beforehand)
os.environ["OPENAI_API_KEY"] = "your_openai_api_key"

# Initialize the LangChain components
llm = ChatOpenAI(temperature=0.7)
memory = ConversationBufferMemory()  # memory that stores the conversation history

# Add a fictitious prior conversation
fake_history = [
    {"role": "user", "content": "Hello, my delivery is running late."},
    {"role": "assistant", "content": "We apologize for the inconvenience. If you give me your order number, I will look into it."},
    {"role": "user", "content": "My order number is 12345."},
    {"role": "assistant", "content": "I checked your order; it is currently in transit and should arrive within 2-3 days."}
]
for message in fake_history:
    if message["role"] == "user":
        memory.chat_memory.add_user_message(message["content"])
    else:
        memory.chat_memory.add_ai_message(message["content"])

conversation = ConversationChain(llm=llm, memory=memory)

# Simulate a conversation with the customer
print("[System]: Welcome to the customer-support bot! Enter a question. (Type 'exit' to quit)")
while True:
    user_input = input("[Customer]: ")
    if user_input.lower() == 'exit':
        print("[System]: Ending the session. Have a nice day!")
        break
    # Use the stored conversation history to generate a response
    if memory.load_memory_variables({})["history"]:
        print("[System]: Generating an answer based on the earlier conversation...")
    else:
        print("[System]: Starting a new conversation...")
    # Generate the response through the LangChain conversation chain
    response = conversation.run(user_input)
    print(f"[System]: {response}")

# Optionally inspect the conversation stored in memory
print("\n[Conversation history stored in memory]:")
print(memory.load_memory_variables({})["history"])

 

 

 

 

=============

 

 

 

2025. 01. 26: Initially archived.

 

=============

OpenAI Developer Platform for Python: 

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
completion = client.chat.completions.create(
    model="gpt-4o",
    store=True,
    messages=[
        {"role": "user", "content": "write a haiku about ai"}
    ]
)
print(completion.choices[0].message.content)  # print the generated haiku

 

OpenAI Assistant API Quickstart: 

https://platform.openai.com/docs/assistants/quickstart

 

 

 

 
